iOS 18.2/macOS 15.2 Review: Picture not so perfect?
Apple Intelligence is back, and this time it’s visual. With the iOS 18.2, iPadOS 18.2, and macOS 15.2 updates, Apple is rolling out its second round of generative AI features, including its first image-generation features: Genmoji, Image Playground, and Image Wand.
These features are, on the whole, more ambitious than the initial batch released back in October, and some of them build on those features: for example, Writing Tools can now make specific, user-directed changes to your text. These releases also mark the first integration of a third-party generative AI service into Apple’s platforms, via the ability to connect to ChatGPT.
Apple Intelligence features are also expanding geographically with these releases, coming to more localized versions of English, and there are even a few non-AI features in the mix, such as improvements to AirPlay and a new mail categorization feature on iOS.
But does this latest round of AI features move the needle in Apple’s quest to improve its users’ lives? Let’s delve in and see.
There’s a Genmoji for that

Of the image-related features of Apple Intelligence, none have seemed more promising than Genmoji. The feature—which is available on iOS 18.2 and iPadOS 18.2 but not, alas, macOS 15.2—promises to let users create endless custom emoji that blend in perfectly with Apple’s own offerings, simply by providing a text prompt. You can use them just as you might any other emoji (at least, in iMessage conversations): inline, as stickers, and even as tapbacks.
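That “at least, in iMessage conversations” caveat comes down to app support: Genmoji travel inside attributed strings as a new kind of adaptive image glyph, and third-party text views have to opt in to accepting and rendering them. Here’s a minimal sketch of what that opt-in looks like, assuming a UIKit app running on iOS 18 or later with Apple’s standard rich-text view:

```swift
import UIKit

// A minimal sketch, assuming iOS 18+ and Apple's standard UITextView.
// Genmoji arrive embedded in attributed strings as "adaptive image
// glyphs"; a rich-text view opts in to accepting and rendering them.
let textView = UITextView()
textView.allowsEditingTextAttributes = true   // rich-text editing required
if #available(iOS 18.0, *) {
    textView.supportsAdaptiveImageGlyph = true
}
```

Apps that haven’t made that change yet are presumably why the feature feels so iMessage-first for now.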
The reality is, as with all things generative AI, more complex. While Genmoji can deliver pretty well on relatively simple prompts—it created a reasonably good “duck detective” for me, for example—it often requires some massaging to get exactly the right result, and the options it provides can end up being bizarre. Equally likely are failed attempts to get it to generate exactly what you want, not because of guardrails or inappropriateness, per se, but merely because Apple Intelligence doesn’t seem to understand what you’re asking.
I did find the user interface a little clunky at times. By tapping a button beneath the generated image, you can choose between creating an emoji using the usual LEGO-person color or—as with Image Playground—patterning it after an actual person in your contacts. The latter is both more and less successful, to my mind: more because it’s super cartoony and looks like a caricature more than anything else, less because it’s also still creepy and weird at times. I didn’t particularly like the way the emoji version of me looked, and while you can tweak the starting image, I eventually concluded that I simply didn’t care for the way the algorithm interpreted pictures of me. And there’s really nothing I can do about that.
Behold, an assortment of Genmoji I’ve created, ranked from best to worst: egret, fox with sunglasses, duck detective, chef kissing with okay sign, shrimp on toast, Dan podcasting, Scottish man playing the bagpipes, smart thinking, black and white cookie, eyeballs looking at clock.

All of that said, I have definitely gotten some entertainment out of Genmoji, even if only for the mental puzzle-solving of how to craft the right query to generate a specific image: say “Jason Snell arms crossed saying no” (pro tip: tapping and holding on a Genmoji in the emoji keyboard will show you the query used to create it). Even the bad ones are often worth sharing just for a laugh about how weird AI is: witness my attempt at creating a “smart thinking” emoji that instead created a terrifying lightbulb creature. I do appreciate that if you don’t like a particular image, you can swipe endlessly to have Genmoji keep creating new options, even if they are often all of a piece.
In the end, Genmoji feels largely harmless. Unlike other AI image generation tools, the limitations of its style mean it’s not particularly ripe for abuse, and it also has the benefit of not feeling like it’s going to put swaths of artists out of work; the only people at risk are Apple’s own emoji designers, and I think the company is smart enough to realize that what pops out of its own AI engines isn’t half as good as what actual people will create. But it also feels like it may be little more than a novelty. Six months from now, I wonder how many of the Genmoji I’ve created will still be in regular use.—Dan Moren
Take a slide on the Image Playground

Using individual tokens (left) you can build images based on specific people.

Image Playground is a new Apple-built app that generates images based on your prompts. Like Genmoji, Image Playground can use people from your Photo library to generate images based on them. The images it creates conform to either a Pixar-style animation look or a hand-drawn illustration style.
Images of people need to be modeled on people in your library—there’s no facility to generate a generic person like “man with top hat” or “hockey player”; instead, it’s got to be an image of someone you know. Not only is that frustrating, but the quality of the portraits of my friends and family also leaves a lot to be desired. Some of them look sort of like the people I know, but most of the time they seem sort of horrific and awful. So Image Playground really wants to make caricatures, and they’re generally not great. Certainly, the quality of these images is way behind where other AI-based image generators are today.
I don’t have a lot to praise about the Image Playground engine itself, but I think its interface is actually pretty good. To build an image, you keep adding keywords to a sort of “concept cloud,” and then see the results. At any time, you can click on one item to make it disappear, and the image will be regenerated without that keyword being involved. It’s a clever way of visualizing how data feeds into a generative model.
I’m also impressed that the app doesn’t just return a single image with a declaration that the job’s done. Instead, generated images appear in a special view that lets you swipe to additional images generated with the same prompt. The entire interface is about auditioning different generated images until you find one that satisfies you—which is exactly the right way to approach a system that’s as scattershot as AI-based image generation.
It makes me think that Apple’s app interface design game is still strong—but its image-generation game is still severely lacking. Image Playground will allow you to embarrass yourself and your friends until the novelty wears off, but it needs to be much better (and more flexible) to have a fighting chance of remaining remotely relevant.—Jason Snell
Wave your Image Wand

Using keywords and source drawings (the small insets on each image) as inputs, Image Wand generates images that more or less reflect the original drawing.

The new Image Wand tool lets you turn a rough sketch into a more detailed image. To begin generating an image, you select the new Image Wand tool from the Apple Pencil tools palette and circle a sketch you’ve made that needs an AI upgrade. You’ll then be asked to describe your image using text.
This is important: While your sketch is one input to the eventual generated image, the Image Wand model needs a text description, too. That can feel a little like cheating, but think of it this way: unlike Image Playground, which gives you no compositional control over what images get generated, Image Wand lets you combine your image request with a specific pencil sketch, so ideally the result will look more like you want it to.
In my tests, the sketch really helped—though it wasn’t a miracle worker. I drew a car going down a road into the sunset, a football zipping from left to right, a frog on a lily pad, a brown bear wearing a baseball cap, and even a triangular spaceship that I drew endlessly when I was in elementary school, and for the most part Image Wand gave me options that were in the general vicinity of the bare-bones sketches.
As with all Apple Intelligence image generation tools, you can add more text prompts in the hopes of steering the output, though it had less effect in Image Wand than elsewhere. It’ll also keep giving you options, so you can swipe to see other attempts to generate your requested image. When you accept the image, it’s swapped into your notes right where your sketch had previously been.
I’m not sure how I feel about Image Wand. It can’t work miracles, but it can do the equivalent of taking a sketch on the back of a napkin and making something that looks serviceable. There’s probably a place for that in some workflows, like corporate presentations and for-placement-only images inside design comps. Or just to make the contents of your own notes prettier.
Other Apple limitations also apply. I tried to draw a generic person with a top hat and beard, and was utterly rebuffed. Apple seems blithely willing to create images representing real people in Genmoji and Image Playground, but it’s not capable of drawing a generic Abe Lincoln? I don’t really get it.—JS
Siri, ChatGPT; ChatGPT, Siri

Too often Siri gets in the way of ChatGPT. (You can override this by explicitly asking Siri to use ChatGPT.)

This new wave of features also includes connectivity with ChatGPT for the first time. Chief among the additions is the ability for Siri queries to be passed to ChatGPT, which happens dynamically based on the type of query. There are also new generative features in Writing Tools, which I’ll discuss later on.
Apple lets you choose how you want to use ChatGPT. You can refuse to use it at all, use it in free mode (you’re subject to limits on the number of ChatGPT queries per day, with information about those limits displayed in the Siri & Apple Intelligence section of Settings), or link it to a ChatGPT account and gain all the benefits of being a free or paid ChatGPT user, including more queries and a history list. By default, Apple will ask you before sending any queries out to ChatGPT, though this is a setting you can turn off if you’re comfortable using the service.
This integration still feels like a work in progress. I asked Siri numerous questions that should’ve triggered ChatGPT, but didn’t—and Siri’s answers were often much worse than what ChatGPT would offer. I kept feeling like Siri was just getting in my way when I knew a better data source existed just beyond reach. (Aggressively asking Siri to use ChatGPT was a workaround.)
I tried a whole lot of queries, but just as one example, when I asked Siri what Buster Posey’s lifetime WAR was, it told me about a skirmish called “The Posey War.” When I told it to ask ChatGPT, it gave me the answer: 44.8. (I also asked both agents if Posey ever played with other future major leaguers when in college; both gave me incorrect answers, though ChatGPT later also produced a correct one. As always, you must verify every single “fact” an AI tells you.)—JS
Writing Tools to order

This update adds some more of the text input, free-association flair frequently connected to large language models. For example, Writing Tools—which in the first-wave feature release mostly let you tap on different buttons to make changes to your text—now has a custom text input field. When you select some text and bring up Writing Tools, you can tap to enter text to describe what you want Apple Intelligence to do to modify your text. For example, I selected this paragraph and then typed “make this funnier.”
Here’s what it did:
This update sprinkles some more of the text input, free-association magic that large language models are known for. For instance, Writing Tools—which in the first-wave feature release mostly let you tap on different buttons like a text-editing ninja—now has a custom text input field. When you select some text and summon Writing Tools, you can tap to enter text to describe what you want Apple Intelligence to do to your text. For example, I selected this paragraph and then typed “make this funnier,” hoping the AI would turn it into a comedy masterpiece.
There’s also a new Compose command that directly pipes your requests to ChatGPT, allowing you to use that model’s features to write new text for you. I asked it to compose an eight-line poem about LLMs in the style of Dr. Seuss, and here’s what it gave me:
In a world of zeros and ones, they grow, Large language models, with info to show. They chatter and babble in digital lands, Processing words with their virtual hands.
In bytes and bits, they learn and they speak, From vast data trees, answers they seek. With flair and with fun, they mimic and think, Crafting tales and truths in a digital blink.
(You see, when language models battle, it’s a language-model battle. And when they battle in a bottle it’s a language model bottle battle. And…)
This update also adds support for a new Writing Tools API that allows third-party apps that don’t use Apple’s standard text controls to still offer Writing Tools as a feature, which is helpful! I’ve seen Writing Tools crop up in numerous apps that didn’t support it in previous versions, and that’s a good thing.
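For a sense of the developer side, here’s a minimal sketch of the simpler case: opting a standard text view in to Writing Tools, assuming a UIKit app built against the iOS 18 SDK. (Apps with fully custom text engines—the ones the new API targets—go through a separate coordinator object and delegate, which is more involved than is worth sketching here.)

```swift
import UIKit

// A minimal sketch, assuming iOS 18+ and a standard UITextView. The
// writingToolsBehavior text-input trait controls how deeply Writing
// Tools can integrate with this particular view.
final class NotesViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        textView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

        if #available(iOS 18.0, *) {
            // .complete allows the full inline rewriting experience;
            // .limited keeps Writing Tools to its overlay panel, and
            // .none opts this view out entirely.
            textView.writingToolsBehavior = .complete
        }

        view.addSubview(textView)
    }
}
```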
In general, Writing Tools is a good feature for people who struggle with the written word, and these updates make it a little better. More than anything, I’m encouraged that Apple has iterated on a feature it just shipped in the last update. It makes me hopeful that we’ll see Apple continuing to push and improve all of the Apple Intelligence features over time.—JS
Let’s get Visual, Visual

Who knows what color it is!

Integrating artificial intelligence with camera functionality is nothing new for Apple: the company’s been building machine learning into its photo pipeline for years, including the ability to search your photo library for images containing, say, a dog or cat, or even to get information about a location, plant, or person in a picture. But with iOS 18.2 running on an iPhone 16 or 16 Pro, Apple’s taking this a little further with Visual Intelligence, which can give you information based on what the camera is looking at right now.
The feature, which is summoned by pressing and holding the Camera Control button, provides three buttons: a camera shutter, Ask, and Search. Of them, Search is the most basic: like Google’s own search app, you can tap it and get results based on what’s in the image. Using it on one of my kid’s toy trains, for example, popped up eBay listings for other toy trains, including at least one of the same model. It’s not a new idea, but it can be handy if you’re just looking for a quick result.
The Ask button takes advantage of the aforementioned ChatGPT integration to allow you to ask questions about what’s in the image, which you can do by typing or dictating. So, for example, I could ask “What kind of train is this?” and it would send that query to ChatGPT and then provide a text box with the answer. I also tried it out by pointing it at a bag of coffee and inquiring how dark the roast was, then asked it to describe what a certain book was about just from looking at the cover.
As long as it doesn’t judge it by this…

As you might expect, the answers to simple questions are fielded rather well. The train description was relatively accurate (as long as the train text was right side up—when it was upside down, ChatGPT mistakenly identified it as an MTA train), though it could not divine the color for some reason; the description of the coffee was correct based on information I checked on the maker’s website; and the book summary was reasonable, if unremarkable. It did, however, think a wooden toy banana was an eraser shaped like a banana, so your mileage, as ever, may vary.
You can also tap the shutter button in Visual Intelligence to take a picture of something and then use Ask or Search, if you don’t want to stand there pointing your phone at an object. This mode will also try to divine what information you might want and provide contextual options, though with occasionally comical results. For example, when I snapped a picture of the toy train, it offered to translate the text on top, which it interpreted as MVTA because it believed it was Russian. It’s not wrong (the character “В” in Russian is equivalent to the English “V”), but I’m not sure what context clues led it to guess that the text was in need of translation.
I have some quibbles about Visual Intelligence. One is the point of the feature: while I’m sure there is utility to being able to summarize a book or movie from a picture, I struggle to find where this feature would actually fit into my life as opposed to, say, doing a Google search. What would actually make it useful is more direct links to the rest of Apple’s ecosystem. For example, if I could point it at a book while browsing in a store and say “Add this to my to-read list,” or take a picture of a restaurant and add it to a Maps collection of places I want to eat, then that might be handy. But it’s a weirdly siloed experience, bereft of connections to the rest of Apple’s ecosystem—pictures you take with it aren’t saved to your camera roll and can’t even be shared to apps like Notes or texted to a contact. For that you have to switch back to the regular camera interface.
Visual Intelligence occasionally has some odd missteps.

My second concern is that the addition of this capability to the Camera Control button continues to overload that control. You press and hold Camera Control to bring up Visual Intelligence, but if you forget and click the button first and then press and hold, you’ll start taking a video instead—you have to lock the phone and start over to get to Visual Intelligence. Honestly, I forgot this feature even existed for a while, and I had to look up how to invoke it without triggering the other aspects of Camera Control. If only there were another control on these phones that you could assign to Visual Intelligence specifically…say, a button that let you take an action. But you won’t find any other way to get at Visual Intelligence: there’s no option in Control Center, no access via Shortcuts, not even the ability to ask Siri to open Visual Intelligence. Maybe in a future version of iOS this will be better integrated with the rest of Apple’s platform. Right now, Visual Intelligence is more of a proof of concept than an actual feature.—DM
Non-intelligence features

In addition to the latest Apple Intelligence features, these platform updates also include a handful of other improvements that don’t fall under the AI banner. A few of these are pretty substantive—Mail categorization—while others are just nice little enhancements.
Mirror, mirror, just show this app

If you’ve ever had to connect your Mac to a TV or projector—whether by connecting a cable or using AirPlay—and been annoyed that it’s an all-or-nothing proposition, macOS 15.2 brings a nice improvement in that regard: you can now choose not only to mirror or extend your display, but also to share a specific app or window.
There’s now a default preference available in System Settings > Displays that lets you set whether connecting a display will default to mirroring the entire screen, extending your desktop, prompting you for an app or window, or asking you what you want to do. In my tests, though, setting “Ask What To Show” as the default didn’t seem to work—it stayed on whatever previous setting I’d used.
This dialog makes it much easier to not accidentally share the wrong part of your screen.

You can also change this on the fly using the Screen Mirroring menu bar control or by clicking Screen Mirroring in Control Center. You’ll see a thumbnail of what the external display is showing, along with the option to Change the mode or Stop sharing. Choose to share a window or an app and your Mac will show a prompt at the top of its main display, along with a floating dialog on each window that lets you choose to mirror that window or all the windows from that app; there’s also a handy Mission Control button to show you all the windows open on your machine.
The handy thumbnail in the Screen Mirroring menu, also available via Control Center, shows you what’s on your external display.

Mirroring multiple windows from an app is a little bit weird. You see them all on a black background, at whatever size they are on your Mac, and you can use Mission Control as usual or even use the new window tiling features—it’s basically like sharing your Mac screen but with everything else blacked out. Windows will also get the purple “sharing” badge, previously used to indicate that they were being shared via software like Zoom, which allows you to turn off the sharing in addition to other window management controls.
I also appreciate that when you aren’t sharing anything, there’s now a default screen on the external display saying “Choose to Mirror from the menu,” which feels a little better than just having the display mirrored by default, and accidentally sharing your email or messages or whatever’s open.
In general, this is a welcome addition to a feature that’s always felt a little bit barebones on the Mac. It’s far more powerful and transparent about what’s getting shared, and for those who frequently do presentations, this will be a substantial improvement.—DM
Categorically true-ish

As a longtime Mail user, I’m always prepared to welcome any substantive feature addition with open arms; Apple’s email client can go years without any significant changes. So I was excited at the prospect of mail categorization, which brings a feature that competitors, including Gmail, have offered for many years.
With categorization on, your inbox is now divided into four categories: Primary, Transactions, Updates, and Promotions. That’s it; you can’t change those categories or add your own. Apple will automatically filter your various messages into those categories, and you can choose whether your Mail app badge (if you have that feature on) shows you only unread messages in Primary or in all categories.
The automatic filtering is…fine. Like Apple’s spam filtering, it’s hit or miss in my experience. You can, at least, re-categorize a message if you don’t agree with how it’s been tagged, though it can be a little tricky to find that option at first. You either have to swipe left on the message, tap the three dots icon, and choose Categorize Sender; or you have to tap into the message thread, then tap the three dots at the top right to find the same command. At that point, your choices are sticking with Apple’s Automatic mode or manually assigning the sender to one of the other categories. Keep in mind that categorization seems to be sender-specific, meaning that if you get multiple types of emails from the same “source,” each sender needs to be categorized separately. (For example, an email about a sale from a company you’ve shopped at should be filtered into Promotions, while a shipping notification should go to Transactions.)
Recategorizing a sender is relatively easy once you find the control.

These categories are, to a degree, what you make of them. Should an email from a conference I attended go in Promotions or Updates? What about emails from the online trivia league I’m in? A lot of it is coming up with a system and just kind of sticking with it. It’s certainly not going to solve all your inbox bloat. Moreover, I don’t want to spend my time categorizing email senders—if I did, I would have just created a complicated folder hierarchy decades ago. I’ve also found that more of the spam that does make it through Apple’s filters ends up in my Primary inbox, which isn’t great either.
The other part of this feature is that messages from the same source are now grouped together; this seems to happen only in the non-Primary inboxes. The idea is that if you have a ton of, say, issues of a newsletter you subscribe to, you can quickly scan back through them in one thread rather than having them spread willy-nilly through your inbox. This can be handy if you’re looking for something from a particular sender.
I’m not entirely sure what I’m supposed to use this menu for.

I do find some of the interface choices puzzling, though. For example, at the top of each thread there’s a header with the name of the sender and an icon. (Apple said when it announced this feature that senders will be able to add their own custom icons to help verify their identity, but this doesn’t seem to have been widely adopted yet.) Below the name there is a subhead that says something like “18 promotions • 46 messages.” Tapping on that lets you toggle between “Show All Messages” and “Promotions.” I assume that means it will show you messages from that sender that aren’t categorized as Promotions…but I’m confused as to why such a disparity exists. It seems as though it groups multiple senders from a single domain, but it’s not well explained.
One unintended benefit of mail categorization that I have appreciated: when you swipe to delete a message entry in one of the category inboxes, it offers to delete every message in that particular thread. This is a great way to clear out old messages from your inbox: I don’t need newsletters dating back years or old promotions that I’ve forgotten to delete.
The biggest oversight of this feature, however, is that it is inexplicably iOS only. So you can get used to being able to categorize your inbox…but only on your iPhone. On your Mac and, yes, even on your iPad, you’ll have the same old single inbox you’ve always had. I don’t know what the hold-up is, but the fact that it’s not on iPadOS suggests that Apple is perhaps wrestling with the interface more than the underlying technology—perhaps it hasn’t quite nailed down how this looks in that multi-pane UI.
Despite my misgivings, I do appreciate that Apple is attempting to help us quell the tide of emails, and I hope this feature will help in this regard. Next do Messages!—DM
Weather or not, here it comes

Who doesn’t love having the temperature in their menu bar? Not these people. Now that there’s a Weather app on the Mac, Apple’s gotten into the game too: you can now put the current temperature and conditions in your menu bar.
Weather at a click!

It’s all pretty straightforward and basic: you’re not going to find color-coded temperatures or UV levels, or even your super-local microclimate here. Click on the icon and you’ll see more detail, including the high and low temperatures for the day, any current alerts, some upcoming hourly forecasts, and the temperature and conditions in your other saved locations (the top five, anyway—more are hidden below a collapsible Other Locations option). Selecting any of those locations will open the Weather app with the relevant place open. Choosing the alerts will take you to the Apple WeatherKit page with details, and choosing the hourly forecast will open the Weather app to your current location and pop up the Conditions.
For most people, this will be a fine and welcome addition—it’s almost a surprise that it took this long for it to show up. If you want something more customizable, well, you probably know where to get it or are happy making your own version.
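On that note, the underlying data is available to developers through WeatherKit, the same service Apple’s Weather app draws on, so making your own version is genuinely feasible. Here’s a minimal sketch, assuming an app with the WeatherKit capability enabled; the location and function name are just for illustration:

```swift
import CoreLocation
import WeatherKit

// A minimal sketch, assuming the WeatherKit capability is enabled for
// this app's bundle ID. Fetches current conditions for a fixed spot.
func currentConditions() async throws -> String {
    let cupertino = CLLocation(latitude: 37.3230, longitude: -122.0322)
    let weather = try await WeatherService.shared.weather(for: cupertino)
    let now = weather.currentWeather
    // Produces something like "Partly Cloudy, 58°F"
    return "\(now.condition.description), \(now.temperature.formatted())"
}
```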
My only minor complaint: figuring out how to turn this feature on! You won’t find it anywhere in the Weather app; instead, you have to open System Settings, select Control Center, and scroll all the way to the bottom to find Weather and choose “Show in Menu Bar.” It’s the only one of those options that belongs to a specific app, and it really should be available in the app’s settings too.—DM