Google released three new AI experiments on Tuesday that aim to help people learn new languages in a more personalized way. Although the experiments are still in their early stages, the company may be looking to take on Duolingo with the help of Gemini, Google's multimodal large language model.
The first experiment helps you quickly learn the specific phrases you need right now, while the second can help you sound less formal and more like a local.
The third experiment allows you to learn new words based on your surroundings using your camera.
Google points out that one of the most frustrating parts of learning a new language is finding yourself in need of specific phrases you haven't learned yet.
With the new "Tiny Lesson" experiment, you can describe a situation, such as "finding a lost passport," to get vocabulary and grammar tips tailored to it. You can also get suggested responses like "I don't know where I lost it" or "I want to report it to the police."
The next experiment, "Slang Hang," aims to help people sound less like textbooks when speaking a new language. Google says that when you learn a new language, you often learn to speak formally, which is why it is experimenting with a way to teach people to speak more colloquially, like a local.
With this feature, you can generate a realistic conversation between native speakers and watch it unfold one message at a time. For example, you can learn through a conversation between a street vendor and a customer, or between two long-lost friends reuniting on the subway. You can hover over unfamiliar terms to see what they mean and how they are used.
Google says the experiment occasionally misuses certain words and sometimes makes up words, so users need to cross-reference them with reliable sources.
The third experiment, "Word Cam," lets you take a photo of your surroundings, after which Gemini detects objects and labels them in the language you're learning. The feature also gives you additional words you can use to describe those objects.
Sometimes you just need words for the things in front of you, because it can show you how much you don't know yet, Google says. For example, you may know the word "window," but you may not know the word "blinds."
The idea behind these experiments is to understand how to use AI to make independent learning more dynamic and personalized.
The new experiments support the following languages: Arabic, Chinese (Hong Kong), Chinese (Taiwan), English (Australia), English (UK), English (US), French (Canada), French (France), German, Greek, Hebrew, Hindi, Italian, Japanese, Korean, Portuguese (Brazil), Portuguese (Portugal), and Turkish. The tools can be accessed through Google Labs.