Liquid Glass and the end of flat design?

I’ve always been a huge fan of the skeuomorphic design of the earlier iPhones; it seemed so quirky and full of character. I loved all the little details, like how the reflections on the chrome volume slider would react and shimmer, as if light were really reflecting off it as you moved the phone around. Or how the Notes app had little torn scraps of paper left over from previous notes. I even loved the Rich Corinthian Leather™ of the Calendar app, complete with the fancy stitching; it felt luxurious, especially on the first high-DPI “Retina” displays.

When iOS 7 came along back in 2013, it was quite a departure from what we had become used to, with seemingly all of that personality thrown away. Gone were the depth, texture, and character, and in their place we had super-thin text and an endless sea of white backgrounds with no shadows. It felt almost “clinical”. This “flat design” has dominated the industry ever since, and although Apple did walk back some of the more extreme choices in later software releases, I still miss the quirky, whimsical designs we used to see: apps that felt like they became the thing they were imitating.

There’s a great quote I read in a blog post called “The Death of Design” that captures how I feel about what we lost with the move to flat design:

We designed things just to see what they might look like. A calendar app, a music player, a weather app made of chrome and glass. Were they practical? Not always. But they were fun. And expressive. And strangely personal. We shared them on Dribbble, rebounded each other’s shots, iterated, and played.

I remember spending hours on the tiniest details of an interface element no one had asked for. Just… exploring. Pushing pixels for the sake of it.

So when rumours started circulating that Apple would be redesigning things this year, I was excited to watch last week’s WWDC 2025 Keynote and see what they had been working on, as they unveiled a new design language for all of their operating systems.

“Liquid Glass”, as they’re calling it, seems like a step in the right direction: the user interface is composed of pieces of glass that reflect the world around them, responding to and refracting light the way it behaves in the real world.

As an example, we can see a button reflecting the yellow of the control above it, just as two objects would in real life.

There are also some really nice playful elements, like the way folders open and close as you hover over them, and convey their state with subtle cues in their icons: when you place files in an empty folder, for example, the icon changes to show that, complete with a little bouncy animation. Elements feel expressive and organic, and liquid is the perfect word for it. It’s taking what we’ve seen from the Dynamic Island on iOS and applying it to the whole system.
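For those curious about what adoption might look like, if I understood the developer sessions right, standard controls pick much of this up automatically once apps are rebuilt, and custom views can opt in with a little SwiftUI. Here’s a rough sketch based on my recollection of the demos; I haven’t verified any of it against the beta SDK, so treat the glassEffect modifier and the .glass button style below as assumptions rather than gospel:

    import SwiftUI

    // Rough sketch only: `glassEffect` and `.glass` are my recollection of the
    // WWDC demos, not verified against the shipping SDK.
    struct GlassControls: View {
        var body: some View {
            VStack(spacing: 16) {
                Text("Now Playing")
                    .padding()
                    .glassEffect() // render this view on a piece of "Liquid Glass"

                Button("Play") {
                    // placeholder action
                }
                .buttonStyle(.glass) // buttons pick up the same material
            }
            .padding()
        }
    }

The appealing part, if it works the way it looked on stage, is that the material handles the reflection and refraction for you, so the playfulness comes largely for free.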

I’ve seen people referring to this new, more expressive era of design as “Neumorphism” or “Physicality”, and I can highly recommend Sebastiaan de With’s excellent blog post on the topic of Physicality over at the Lux blog.

After watching the Keynote, I do have a couple of concerns. The first is around text legibility. In some of the examples they showed, the text was quite hard to read, especially in busy scenes where your content shows through the glass UI controls. Some apps fare better than others in this regard, so I’m hoping it will improve during the betas.

The second concern is around information density, which still seems to be under attack. Apple is clearly going all-in on “consistency” across their platforms, but I worry that we’ll end up losing something in the process. For example, on macOS we now have iOS-style alerts that are narrow and stack all of their buttons vertically. This feels like an odd choice for a desktop OS used almost exclusively with big widescreen displays. Why not play to the strengths of each platform?

Overall though, I’m really excited to see how this all ends up coming together, especially when they’ve finished polishing it for release this autumn, and I can’t wait to try it out for myself.

Visual Voicemail is actually pretty great

Yes, I know. I’m sure some of you will be thinking this is a bit of a random post, as you double-check your calendars to make sure that it is indeed the year 2025. Don’t worry, I’m not going to start talking about fax machines and record players, but I wanted to share some recent experiences with good old Visual Voicemail.

History

I remember my first mobile phone had voicemail, but the process to listen to your messages was a bit tedious. If someone left you voicemail, the network would send you a text to let you know, along with the number that called. As this didn’t link to your address book, you’d have no idea who it was unless you’d memorised all your friends’ phone numbers, so you’d then have to dial the voicemail service to listen to the actual message.

After you’d navigated through the various menus, the caller’s number would be read out slowly, one… number… at… a… time, and finally you’d get to hear the message itself. After struggling to decipher the low-quality recording, and assuming you even recognised the caller’s voice, you could then return their call. And that’s assuming the message you wanted was the first one in the list; you might have had to sit through several before you got to the one you were after. Not really the best experience.

When the original iPhone was revealed back in 2007, one of the features they mentioned for the “Phone” app was Visual Voicemail, which Steve Jobs described as “Random Access Voicemail”. It worked more like email than the system we were used to: new voicemails would simply show up in an inbox for you to listen to, in whichever order you wanted, and you could easily scrub through the audio, then return the call with a single tap if you liked. It even let you record a custom greeting right there in the “Phone” app. The difference was night and day.

There was only one slight complication: this feature required support from the mobile carriers; it wasn’t something that could be done entirely on-device. Here in the UK the iPhone launched on the O2 network, so they supported Visual Voicemail from day one, but incredibly, some networks still don’t support it, long after the iPhone stopped being exclusive to O2.

SIM hopping

For several years now I’ve been hopping between networks to take advantage of the best deals, and there’s one thing I always, ALWAYS notice: whether the network supports Visual Voicemail. Every time I try a carrier that doesn’t support it, I think maybe I can live without it, that maybe the deal is so good it’s worth the trade-off. And every time, I’m wrong.

Issues with signal quality are surprisingly common in the UK, so missing a call on a network without Visual Voicemail feels a bit like being thrown back in time. Dialling into a clumsy old voicemail service, rather than having my messages magically appear as soon as I get a better signal, always leaves me pining for this simple little feature from the original iPhone.

And it’s not that I’m some sort of celebrity, dealing with hundreds of calls a day! It’s just that the way we communicate with our friends and family has changed. These days we tend to text more than we call, keeping in touch with our loved ones via group chats and photo sharing. But this means if someone does call when I’m out of service, it’s usually for something important that I don’t want to miss.

Taking it to the next level

Since I’ve recently returned to a network that supports Visual Voicemail, I’ve finally been able to take some of the features introduced back in iOS 17 for a spin, and I’ve found them really useful.

Live Voicemail

This feature is really handy for screening unknown numbers. The way it works is that your phone answers the call, but Siri does the talking, asking the caller to leave a message as if they had reached voicemail. As the caller speaks, their words are transcribed in real time, so you can see on the lock screen what the call is about, and you can even decide to pick up if it’s a call you actually want to take.

Transcriptions

The transcriptions captured from Live Voicemail are saved in the Voicemail tab of the Phone app, but what about regular Visual Voicemail? Well, the iPhone automatically transcribes those as they’re delivered from the network, so you’ll get a transcription no matter which method you use. I’ve found it very useful, and the quality seems pretty good, with only occasional mistakes, which are usually easy to work out from the rest of the message.

Getting out of the way

Overall, these features work so well together that they make it a breeze to catch up on any calls I miss when I’m out and about, and they’re a great example of technology making things easier rather than getting in the way. For me, that is truly technology at its best.

As an aside, I wonder how many younger people today know that the voicemail icon represents an old reel of audio tape? 3D-printed save icon, anyone?

Not so smart assistants?

I really like using Siri to get things done when I’m not able to use my phone normally. For example, when cooking I can quickly add things to my shopping list as I use them up, so I’ll remember to buy more the next time I’m at the supermarket. Or if I’m driving somewhere, I can easily control my music, or reply to a friend to let them know I’m running late.

When Siri works, it’s brilliant, but there are times it can be incredibly frustrating to use.

On it… still on it…

Every year at WWDC we hear from Apple that Siri can do more and more things entirely on-device without needing the Internet, but in practice it still seems to suffer from connection issues (even when all my other devices are fine). This usually manifests as Siri responding with the phrase:

On it….. still on it….. something went wrong!

As soon as Siri answers any request with “on it…”, I know with 100% certainty that the request is going to fail. Even worse, if you immediately ask Siri to do the same thing again, it will typically succeed! I really wish Siri would just silently retry the request itself, and save me from hearing that dreaded phrase again.

Split personality

I have a couple of HomePod minis (or is it HomePods mini?), one in the living room and one in the kitchen. When I’m cooking it’s handy to set various timers, so naturally I ask Siri to do that, but if I go into the living room and ask Siri to check on the status of a timer, it acts like it has no idea what I’m talking about.

Me: “Siri, how long’s left on the timer?”
Siri: “There are no timers on HomePod.”
Me: *sigh* “Siri, how long’s left on the kitchen timer?”
Siri: “There are no timers on HomePod.”
Me: *SIGH* *walks to the kitchen* “Siri, how long’s left on the timer?”
Siri: “There’s a timer with 4 minutes left.”
Me: (╯°□°)╯︵ ┻━┻

I found something on the web

I’ve also had interactions where Siri gives me examples of phrases I can use, only to turn around and say it has no idea what I’m talking about when I try them. Or it abandons any attempt at understanding and just does a web search for what I asked. That usually isn’t very helpful, and it’s completely pointless on the HomePod, given it lacks a display; Siri will chastise you in that case and tell you to “ask again from your iPhone”.

When it comes to memory, sometimes it will forget what you were talking about mere seconds earlier, forcing you to repeat your request in full, trying to get the syntax correct. It’s like typing into a command line, rather than having a conversation.

By comparison, when this does work it feels so much more natural. Asking about the weather, then following up with “and what about tomorrow?”, flows quite nicely. It can also be quite clever: for example, if you ask about “tomorrow” but the time is after midnight, it will check whether you actually meant today, which is probably what most people would mean in that case.

SiriGPT?

Can an LLM like ChatGPT help here? I’ve seen a few articles this week claiming that’s exactly what Apple is working on for iOS 18, and I think it would make a big difference. ChatGPT is already so far ahead of Siri simply in terms of how natural-sounding its conversations can be; they can feel quite convincingly real.

I think it would substantially improve the experience if Apple could integrate those conversational abilities into Siri, but they will need to be very careful about the fact that LLMs hallucinate a lot, which is to say they can generate output that sounds plausible but is factually incorrect or completely unrelated.

Although Apple hasn’t jumped on the current AI bandwagon yet, they’ve actually been using machine learning (ML) in their products for a while now. They tend to use it in more subtle ways, such as separating a subject from its background so that Portrait mode can be applied to your photographs, or in real time during video calls. It also powers the Visual Look Up feature that helps you identify animals, plants, landmarks, and more. There are tons of little features like that throughout Apple’s operating systems that rely on ML behind the scenes.
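Apple doesn’t say exactly how Portrait mode itself is built, but the same kind of on-device subject separation is exposed to developers through the Vision framework, which gives a sense of what’s going on behind the scenes. Here’s a minimal sketch; the function name and the idea of passing an image URL are just for illustration:

    import Vision

    // Minimal sketch: generate a person-segmentation mask entirely on-device,
    // the same kind of subject/background separation described above.
    func personMask(for imageURL: URL) throws -> CVPixelBuffer? {
        let request = VNGeneratePersonSegmentationRequest()
        request.qualityLevel = .accurate // trade speed for a cleaner mask

        let handler = VNImageRequestHandler(url: imageURL, options: [:])
        try handler.perform([request]) // no network round-trip involved

        // The result is a greyscale mask: the subject in white, the background in black.
        return request.results?.first?.pixelBuffer
    }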

The good news is that Apple’s privacy focus, and the presence of the Neural Engine in all their recent chips, means they’re able to run a lot of these ML models entirely on-device. I’d expect no less from a next-generation Siri, and for a smart assistant with so much access to your personal data, that can only be a good thing.

The Sum of its Parts

I recently started a new job, and one of the upsides is that my computer isn’t locked down into oblivion, so I can actually use a lot of the features that make the Apple ecosystem so great to begin with!

Universal Clipboard now works properly, so setting up things like my HR profile was as easy as copying the image I wanted from my phone, then pasting it on the new Mac. It made the setup process so much faster and smoother.

Reminders syncs properly, so I can create a “Work” list and add things to it as they pop into my head, or quickly add a personal reminder if something comes up during the work day. The old way involved sending an email to myself, either at my work address or my personal address depending on the subject, and then “processing” it the next time I was on whichever device had the relevant mailbox configured. Yeah, I know 🙈

I can now make use of separate profiles in Safari to keep personal stuff and work stuff in their own sandboxes, but if there’s something I need from another profile, like a bookmark, I can find it without much friction. A useful tip here is that you can configure Safari to always open specific websites in certain profiles. I use that to make sure any YouTube links I click open in my Personal profile, where I am subscribed to YouTube (who wants to see that many ads?!).

It also allows me to bring my collection of useful apps along with me, without needing to buy them all again every time I change jobs, as well as benefiting from any subscriptions I already have.

Being able to use my messaging apps again means I’m not stopping throughout the day to get my phone out and respond to friends and family. I can quickly respond on the Mac when needed, and then continue with my work without losing my momentum.

Finally, I can access my music streaming without having to fiddle with my phone. It used to be a hassle switching audio to my Mac for calls and then back to my phone for music; now it’s all in one place and much simpler.

Overall, I used to face all these little points of friction throughout my day, but now they’re gone. It made me think of the old saying:

The whole is greater than the sum of its parts.

Those individual elements aren’t revolutionary on their own, but when they work together smoothly across all my devices like this, it really feels like the technology is serving me, not the other way around.