vintagedave a day ago

The worst of it is even the Pictures Under Glass have got worse. They used to be designed to look like things, so you’d see something that looked pushable and push (tap) it.

So even if there was no tactile engagement there was visual engagement communicating tactile interaction.

Now even that’s gone. Our Pictures Under Glass are bland and flat and look like translucent acrylic and, well, glass.

  • Lalabadie a day ago

    Back when Apple announced "3D Touch" (by name before showing it), I dearly hoped it was _finally_ introducing above-the-screen finger detection. Instead it was pressure sensitivity, and even that got dropped a few years back.

    I'm still hoping for a hover-like mechanism that allows touchscreens to have non-committal interactions. As you say, touch is surprisingly devoid of depth given its maturity and time on the market.

    I'm hoping some level of above-the-screen finger detection comes eventually, because if it gets to a mainstream distribution stage, it opens the door for bleeding-edge to aim for larger detection distances, and eventually seeing hand gestures above the screen.

    Whether it's in this form or something completely different, our screen interactions need to be broadened, especially in a direction that allows users to signal intent but not commitment (like hovering with a mouse, or looking with eye tracking).
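
    A hover layer could be sketched as a thin classification step over the input stream. This is purely hypothetical (the `height_mm` field and the threshold value are invented for illustration), but it shows how a single touch event stream could be split into intent vs. commitment, the way mouse UIs split hover from click:

```python
from dataclasses import dataclass

# Hypothetical event from a touchscreen that can sense a fingertip
# slightly above the glass; height_mm == 0 means physical contact.
@dataclass
class TouchEvent:
    x: float
    y: float
    height_mm: float

HOVER_THRESHOLD_MM = 15.0  # invented detection range

def classify(event: TouchEvent) -> str:
    """Split one input stream into commitment vs. intent,
    the way mouse UIs split click from hover."""
    if event.height_mm <= 0.0:
        return "commit"  # finger on glass: perform the action
    if event.height_mm <= HOVER_THRESHOLD_MM:
        return "intent"  # finger near glass: preview/highlight only
    return "idle"
```

    Anything in the "intent" state could drive previews, highlights, or tooltips without triggering an action.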

    • mncharity 20 hours ago

      > it opens the door for bleeding-edge to aim for larger detection distances, and eventually seeing hand gestures above the screen [...] our screen interactions need to be broadened [...] signal intent but not commitment (like hovering with a mouse, or looking with eye tracking)

      My laptop used to have several cameras (flopping up from behind screen) for 3D hand tracking, (crufty) keycap touch, and head/gaze pose. With shutter glasses, overlaying a slightly-raised transparent top-down view from a keyboard cam was helpful and tolerable. Browser-based stack. Input fusion with diverse latencies required app state rollback. Fun(ish).

      > _finally_ [...] even that got dropped a few years back [...] eventually [...] broadened

      This was profoundly frustrating. Waiting for a couple of giant companies, interested only in extremely difficult mass consumer markets, to make tech available so it can afterward be explored and niches probed, with patents shackling everyone else, was just... sigh. Pharma turning the nation's explore-exploit dial hard over regrettably leaves a lot of paths forward nonviable.

big-green-man 2 days ago

I like this perspective and I share it. It frustrates me to watch the initial demo of Apple's concept of a touch-oriented computer, with multi-touch demonstrated and objects moved around, and then to sit here typing this on something like it, except it's one finger at a time and multi-touch was dumbed down into pinch-to-zoom and little else. I can't stand the fact that almost all computer interfaces outside smartphones are operated with only the right wrist and index finger. The future is what we have now but transparent, what we have now with no bezels, what we have now but made of disposable paper, what we have now but the size of a table. It's just not compelling to me.

And it's because it's hard to imagine a completely new way of interacting with computing technology. People who are able to do that don't get paid scriptwriters' salaries; they get paid Steve Jobs salaries. And they are worth billions to the organizations capable of making those visions a reality.

  • jonbell a day ago

    Funny side note: he was paid a salary of $1.

dang a day ago

Related:

A brief rant on the future of interaction design (2011) - https://news.ycombinator.com/item?id=31511361 - May 2022 (132 comments)

A brief rant on the future of interaction design (2011) - https://news.ycombinator.com/item?id=21116948 - Sept 2019 (153 comments)

A Brief Rant on the Future of Interaction Design (2011) - https://news.ycombinator.com/item?id=6325996 - Sept 2013 (35 comments)

A Brief Rant on the Future of Interaction Design - https://news.ycombinator.com/item?id=3212949 - Nov 2011 (150 comments)

seism 2 days ago

When I am forced to use a phone or tablet of a brand and size I'm not used to, everything seems wonky and out of place. Yet I can adapt to a different brand and size of car in a few minutes. Replacing tactile memory with skeuomorphism is like replacing your sickly but feely biological arm with a prosthetic one that can extend 10m. I'm always up for advocating for more arts & crafts as part of interaction design.

ChrisMarshallNY a day ago

I had the privilege to work with one of the best Interaction Design teams around (DDO). It really opened my eyes to the process.

I enjoyed this rant. I know it's old, but this is the first time I've seen it.

bbor a day ago

This is incredible work, thanks for sharing. That said, does anyone know what kind of technology they’re gesturing to? Maybe I’m really stuck in the present, but short of tangible holograms I’m not sure how we’d make tactile computing devices.

The keyboard is tactile, and I've long dreamed of augmenting Alexa by covering my house in purpose-specific buttons, but that seems like a short road. Even the most advanced HCI device I've ever seen proposed by a consumer company—Meta's Orion wristband, which reads hand movements by measuring the electrical signals in your wrist—isn't tangible in the slightest.

What am I missing? Can any fellow futurists point me in the right direction? I don’t need “doable today” or even “the technology is worked out”, but ideally I’m looking for something more doable than tangible holograms. See https://xkcd.com/678/

EDIT: the VR suits from the Three Body Problem books come to mind, using tiny actuators and temperature controls (thermal actuators?) to simulate touch. I could see that in a glove for sure, and that's probably the most futuristic tactility gets at this point, but I doubt it'll ever see full-body usage, for both technical-feasibility and user-convenience reasons. There's a reason the TV show replaced them with brain-modulating headsets… I guess that's really the end dream.

What is a tangible hologram (or any interface, really) other than an illusion, at the end of the day?

  • _rpf a day ago

    Bret Victor's current project is Dynamicland, a real space he's been building to explore computing within the physical world, with humans at its center. The system he invented is known as Realtalk, and it is integrated into the building, where cameras and projectors engage with the humans within it.

    https://dynamicland.org/

  • alanbernstein a day ago

    > a dynamic medium we can see, feel and manipulate

    I wonder if this is a "pick 2" situation.

    Touchscreen: see and sorta manipulate

    Orion: manipulate (in only the weak sense that it uses more than one finger)

    Braille display, haptic glove/suit: feel and see (redundantly?)

    In my touchscreen-biased mind, I can't imagine a technology that could do all three of these, which I would want to use regularly or carry around, or wear for hours at a time.

    The touchscreen, combining the "see" with at least a tiny bit of the "manipulate", is actually an amazing step, but it may be a dead end. I've seen research into things like tactile screens, with adjustable surface height. But the benefit it gives is evidently not worth the cost in development, manufacturing, or complexity.

    • bbor 20 hours ago

      Thanks for taking the time, really thought provoking… Will definitely be kicking that around in my head. No counterpoints.

      I was going to correct you that Orion is just a baby step towards Minority Report holograms (aka plain ol’ non-tangible holograms), and that reminded me: the famous opening scene had lots more than holograms!

      Specifically, they had physical objects (hard drives, knobs, spheres, etc.) that didn't seem to have much electronics actually in them, but that were tracked and responded to by the computing system in a dynamic way. For example, easily passing data between the hologram and a little ball that you can move to another display, give to someone else, etc., in a completely seamless (read: AI-managed) way.

      That might be my biggest hope for the future, while we’re waiting on tactile gloves. This kind of thing: https://www.behance.net/gallery/105147921/Apple-x-Procreate-...

      • alanbernstein an hour ago

        Counterpoint to myself: Accelerometer-based inputs might demonstrate that I'm wrong.

        In the Zelda games on the Switch, you aim the bow and arrow with motion controls. This may be the smoothest, most natural interface I can think of; changing back to joystick controls feels really bad.

        The input is 2DOF manipulation, and the feedback is primarily seen on the screen. There is a small amount of "feel" feedback inherent to the physical motion of the controller.

        Another example on the Switch is a "labyrinth" balance minigame in Kirby.

        I recall early iPhone games exploring this UI, but it doesn't seem to have developed into anything except games and levelling utility apps.
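
        The basic mapping behind gyro aiming is simple enough to sketch. The numbers here (sensitivity, deadzone) are invented for illustration, but the shape of it captures why it feels so direct: tilt maps near-linearly to an aim offset, with a small deadzone absorbing hand tremor:

```python
import math

def tilt_to_aim(pitch_rad: float, yaw_rad: float,
                sensitivity: float = 1.5,
                deadzone_rad: float = 0.02) -> tuple[float, float]:
    """Map controller tilt (2 degrees of freedom) to an aim offset.

    Inside the deadzone the output is zero, so hand tremor doesn't
    wobble the reticle; outside it, the response is linear.
    """
    def axis(v: float) -> float:
        if abs(v) < deadzone_rad:
            return 0.0
        # Subtract the deadzone so the response ramps up from zero
        # smoothly instead of jumping at the threshold.
        return sensitivity * (v - math.copysign(deadzone_rad, v))

    return axis(pitch_rad), axis(yaw_rad)
```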

  • mncharity 15 hours ago

    > Maybe I’m really stuck in the present, but short of tangible holograms I’m not sure how we’d make tactile computing devices.

    >>> what I was thinking of [...] How could one possibly fundamentally expound upon the keyboard or mouse?

    You're wearing a VR headset, so the visual UX of keyboard and mouse can be 3D anything. The keyboard and surround is all multitouch. The keyboard is n-key rollover - feel free to chord. The mouse is tracked in 3D. It's also on a motorized arm, so it has force feedback, and you can leave it wherever, or have it move itself. You have two of them. Your hands are tracked in 3D, so feel free to gesture, and to interact with the visual environment. If you value the fingertip feel of your keyboard, you can stick with fingernail vibrators. The mouse surface is variously pressure sensitive. The mouse surface is vibrator chips. The keys are covered with vibrators. The keys are multi-axis pressure sensitive. Your palm and back of hand are covered with vibrators. The headset provides a high-resolution soundscape. The headset tracks head and gaze. Any screen is notional, as the headset provides high resolution content with any apparent focus depth. The keyboard is on a robot arm, so you can stand or pace - just ask, and it will be at hand. For some tasks, like sorting piles of objects, you might briefly prefer a different interface device.

    Much of that is current-ish tech, though painful to gather and make comfortable, given a dysfunctional market for such.

    Contrast the design richness available to phone vs desktop, in both interface and apps. Imagine desktop absorbing lessons where phone is better. Now try to imagine the design richness available to desktop vs something else, where that something isn't pretending to be a half-century-old terminal emulating a manual office typewriter.

  • RevEng 21 hours ago

    I don't know about full 3D manipulation, but we used buttons and switches and levers for decades. They worked great. They are easy to see, feel, and manipulate. They can't move or change, so muscle memory develops naturally. You can use them without looking and with gloves on - essential when driving or operating other dangerous machinery. Their natural constraints made them easy to discover and learn.

    Over time we got better at packing more into a button. Multimodal buttons became a thing, and once alphanumeric displays became practical, a single knob or arrow keypad to scroll through tiny menus became common. Twenty years after that, everything is a flat, touch-sensitive button, and more and more we hide them from view entirely until they are interacted with.

    I believe the reasons for this are twofold. First, they look futuristic. People are drawn to new, shiny things, even if they are functionally worse. The second is cost. Physical switches are quite expensive compared to digital electronics. As touchscreens have become cheaper, they have replaced more physical inputs. Similarly, they have replaced discrete indicator lights, meters, and other single-purpose indicators.

    The cost savings are modest in terms of an entire product, but businesses will spend days saving one cent in production costs because it adds up over millions of units produced. It usually amounts to a small percentage of profits, but they'll chase anything.
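
    With made-up but plausible numbers, the arithmetic behind that incentive looks like this:

```python
saving_per_unit = 0.01       # one cent shaved off the bill of materials
units_produced = 10_000_000  # hypothetical production run

total_saving = saving_per_unit * units_produced
print(f"${total_saving:,.0f}")  # a penny per unit becomes $100,000
```

    Small per-unit, large in aggregate - which is exactly why the physical switch loses to the touch layer on a spreadsheet.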

    • bbor 20 hours ago

      Great points, it just occurred to me that I'm biased towards high-level applications! HCI equally applies to our appliances, factories, and vehicles, to name a few, and I absolutely agree that physical affordances are not quite as outmoded as Musk might say they are.

      Fun fact that we don’t talk about enough: I’m not sure if it’s still happening, but at some point, the controls in the spaceX shuttles were touchscreens running javascript. It brings me joy (and obviously fear!) to imagine an npm node_modules directory makings it to space… I think the consensus is spot on that there can absolutely be too many touchscreens.

      More fundamentally, your answer doesn't really speak to what I was thinking of, though: general computing. Buttons and switches are great for purpose-built machines, but their applicability to general computing is minimal. How could one possibly fundamentally expound upon the keyboard or mouse?