3 Things You Should Know About Apple’s AI Model MGIE

MGIE allows you to edit images by just typing text commands.


With the splashy launch of the Samsung Galaxy S24's AI features, almost everyone we know is either discussing them or ready to pledge their allegiance. Now the spotlight turns to Apple as it embarks on its own journey of AI innovation. Introducing MGIE!


What is MGIE?

(Image credit: sir-apfelot.de)

MGIE stands for MLLM-Guided Image Editing, a system that uses machine learning to edit images based on natural language instructions. Apple researchers released it in February 2024, and it is poised to change the way we edit photos.

Though MGIE is not a finished product, it demonstrates an exceptional ability to turn simple, ambiguous text commands into precise editing instructions, smoothing communication in the photo-editing process, as reported by Lowyat.net.


What does it do?

Built on multimodal large language models (MLLMs), MGIE interprets your instruction and generates a new image that adheres to it. From basic adjustments like contrast and brightness to more intricate tasks such as altering hair or eyes, or even adding toppings to a pizza, MGIE offers a wide array of editing capabilities.


How can you use it?

(Image credit: neurohive.io)

Imagine you have an image of a cute cat and wish to enhance it using MGIE. You can type natural-language directives like "Make the cat look happy" or "Add a hat to the cat", and MGIE will execute your commands, transforming your vision into reality.
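MGIE itself isn't something you can call with a one-liner today, but the core idea, translating a loose instruction into a concrete edit operation, can be sketched in plain Python. Everything below (the keyword table, the edit functions, the tiny grayscale "image") is a toy stand-in for illustration, not MGIE's actual interface:

```python
# Toy sketch (NOT Apple's MGIE API): shows the idea of mapping a
# natural-language instruction to a concrete image edit. A real system
# would use an MLLM to derive the edit; here we fake it with keywords.

def brighten(pixels, amount=40):
    """Raise each grayscale pixel value, capped at 255."""
    return [min(255, p + amount) for p in pixels]

def darken(pixels, amount=40):
    """Lower each grayscale pixel value, floored at 0."""
    return [max(0, p - amount) for p in pixels]

# Invented keyword-to-edit table for demonstration purposes.
EDITS = {"happy": brighten, "brighter": brighten, "darker": darken}

def edit_image(pixels, instruction):
    """Apply the first edit whose keyword appears in the instruction."""
    for keyword, operation in EDITS.items():
        if keyword in instruction.lower():
            return operation(pixels)
    return pixels  # no matching edit: return the image unchanged

image = [10, 120, 250]  # a tiny grayscale "image"
print(edit_image(image, "Make the cat look happy"))  # [50, 160, 255]
```

The point of the sketch is the pipeline shape: instruction in, transformed pixels out. MGIE's contribution is replacing the crude keyword lookup with an MLLM that rewrites vague requests into precise edit directives.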


Writer’s note

While MGIE is not available on any of your current Apple devices, its release signals a promising future offering from the brand. Despite being in its infancy, it can already produce realistic, creative edits that match your intent.

Currently, you can learn more about MGIE and try it out on Apple’s GitHub page.
