Opera’s New Feature: Download and Use LLMs Locally

Opera has announced a feature that allows users to download and run Large Language Models (LLMs) locally on their computers. The feature is rolling out first to Opera One users who receive developer-stream updates.

It lets users choose from over 150 models across more than 50 families, including notable models such as Llama from Meta, Gemma from Google, and Vicuna.

This feature is part of Opera’s AI Feature Drops Program, which aims to give users early access to some of the latest AI features.

Opera uses the open-source Ollama framework inside the browser to run these models on users’ computers. The models currently available are a subset of Ollama’s library, though Opera plans to add models from other sources in the future.
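Opera has not published details of its integration, but standalone Ollama exposes a simple local HTTP API, which gives a feel for how a browser could talk to a model running on the same machine. Below is a minimal sketch in Python, assuming Ollama is installed, serving on its default port (11434), and that a model such as gemma has already been pulled with `ollama pull gemma`:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def ask_local_model(prompt: str, model: str = "gemma") -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,    # assumes this model was pulled beforehand
        "prompt": prompt,
        "stream": False,   # ask for the whole completion in one JSON object
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Everything here runs on the local machine; no prompt data leaves it.
    print(ask_local_model("In one sentence, what does running an LLM locally mean?"))
```

With `stream` set to false, the server returns the full completion as a single JSON object; Ollama also supports streaming the response token by token, which is what a chat-style browser UI would more likely use.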

Each model variant requires more than 2GB of space on the local system, so users need to be mindful of their device’s storage capacity.

Opera’s VP, Jan Standal, noted that while the current models are sizeable, they may shrink as they become more specialized for specific tasks.

This feature is particularly useful for users who want to test various models locally. For those looking to save space, however, online tools such as Quora’s Poe and HuggingChat make it possible to explore different models without downloading them.

Opera has been integrating AI-powered features into its browser since last year. The company launched an assistant called Aria in the sidebar in May and brought it to the iOS version in August. In January, Opera announced it was building an AI-powered browser with its own engine for iOS, after the EU’s Digital Markets Act (DMA) required Apple to allow alternatives to the previously mandatory WebKit engine for mobile browsers.
