Artificial Intelligence (AI)
Step-by-step guide to installing Gemma 4 locally on iOS and Android.
Gianro Compagno
2026-04-18
5 min read
In the current landscape of artificial intelligence, proprietary models such as GPT, Gemini, Claude, and Grok dominate the scene, accessible through their official applications or via API integration. However, those seeking greater autonomy and control find open AI models a more flexible and customizable alternative. Notable examples include Meta's LLaMA, Mistral AI's models, Alibaba's Qwen, and Google's Gemma, all of which can be installed and adapted on personal devices, even with personal data, without relying on the original providers.
While working with open models may seem complex, tools like Ollama or LM Studio simplify the process for PC and Mac users, both at home and in professional environments. In the case of Gemma 4, Google's open model, the company has launched an experimental application that turns any Android phone or iPhone into an AI testing lab: Google AI Edge Gallery, available for free.
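On a desktop, the Ollama workflow mentioned above comes down to pulling a model (`ollama pull <tag>`) and sending prompts to the local server it runs on port 11434. A minimal Python sketch of the request side, assuming Ollama is running locally; note that the model tag `gemma` is an illustrative assumption, since the exact tag under which a Gemma 4 build would be published is not specified in the article:

```python
import json

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    `stream: False` asks for a single complete JSON response
    instead of a stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}

# "gemma" is an assumed tag for illustration only.
payload = build_generate_request("gemma", "Explain on-device LLMs in one sentence.")
print(json.dumps(payload))

# To actually query a running server (requires `ollama serve` plus a
# prior `ollama pull` of the chosen model), one could send the payload
# with urllib:
#
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL,
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

The network call is left commented out so the sketch stands alone; on a phone, Google AI Edge Gallery plays the role that Ollama plays here, hosting the model and handling prompts itself.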
Google AI Edge Gallery, compatible with iOS and Android, lets large language models (LLMs) run directly on the device, bringing on-device the kind of capability that millions already use daily through cloud assistants such as ChatGPT or Gemini. To work with open LLMs, an intermediary app is normally needed, such as Ollama, LM Studio, or Google's own AI Studio, which runs in any web browser.
The power of current smartphones and the efficiency of modern AI models make it possible to run LLMs on mobile devices. Thus, applications like Google AI Edge Gallery democratize access to these technologies, allowing any user to experiment with models like Gemma 4 on their phone.
Installing Google AI Edge Gallery is straightforward: it is available on Google Play and the App Store, and is compatible with Android 12 or higher and iOS 17 and above. Additionally, on its GitHub repository, Google provides a detailed guide for those who prefer to install the app via APK or deploy it on enterprise devices.
Within the app, there are two ways to install Gemma 4. The first is through the Models menu, where users can select and download the version suited to their device's capabilities. Gemma 4 E2B, with 2 billion parameters and a 2.5 GB download, is ideal for modest phones and IoT devices, offering speed and low resource consumption in exchange for a more limited context. Gemma 4 E4B, with 4 billion parameters and 3.6 GB, requires more powerful hardware but delivers more coherent responses and stronger reasoning.
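The quoted download sizes also hint at how compactly each variant's weights are stored. A rough back-of-the-envelope check in Python, taking the article's figures at face value and assuming decimal gigabytes (both assumptions, since the exact packaging is not stated):

```python
def bits_per_parameter(file_size_gb: float, params_billions: float) -> float:
    """Rough storage cost per weight, from download size and parameter count."""
    total_bits = file_size_gb * 1e9 * 8  # decimal GB -> bits (an assumption)
    return total_bits / (params_billions * 1e9)

# Figures quoted in the article:
e2b = bits_per_parameter(2.5, 2)  # Gemma 4 E2B: 2.5 GB, 2 billion parameters
e4b = bits_per_parameter(3.6, 4)  # Gemma 4 E4B: 3.6 GB, 4 billion parameters
print(f"E2B ~ {e2b:.1f} bits/param, E4B ~ {e4b:.1f} bits/param")
```

By this estimate the 4B variant ships at roughly 7 bits per weight versus about 10 for the 2B one, which is why doubling the parameter count grows the download by well under a factor of two.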
The second option is from the app's main screen, which presents the different tasks the AI models can perform: conversational chat (AI Chat), audio transcription and translation (Audio Scribe), and object recognition in images (Ask Image). Selecting AI Chat displays the recommended models, and users can download their preferred one, with warnings about hardware requirements where necessary.
Once a model such as Gemma 4 is downloaded, it can be tested across the available functions, with more promised for the future. Installing Gemma 4 via Google AI Edge Gallery is an accessible and secure way to explore the potential of open AI models, with the advantage of operating locally and without sharing data with Google unless the user decides otherwise.
Source: hipertextual.com