AKA has launched Muse V2, the alpha release of its next-generation adaptive-understanding AI engine, built to make AI communication more natural and seamless. AKA's goal is an engine that automatically adapts to users' data and predicts users' responses based on that data. Muse V2 therefore brings two major updates. First, context scoring improves engine performance and architectural efficiency. Second, the engine is more personalized: users can now select the difficulty of responses according to the CEFR (Common European Framework of Reference for Languages), allowing the level of the engine's responses to be adjusted to suit each user's English level.
To evaluate the performance of the dialogue systems more systematically, the data team rated the generated responses on a scale from "unacceptable" to "excellent". After improvements to the backend NLP system, performance increased by about 30%.
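A rating scale like this can be aggregated into a single metric for tracking improvements. The sketch below is a minimal illustration, assuming hypothetical intermediate labels and an acceptability threshold, since the post only names the endpoints "unacceptable" and "excellent":

```python
# Hypothetical ordinal scale; only "unacceptable" and "excellent"
# appear in the post, the middle labels are assumptions.
SCALE = {"unacceptable": 0, "poor": 1, "acceptable": 2, "good": 3, "excellent": 4}

def acceptable_rate(ratings):
    """Fraction of responses rated 'acceptable' or better."""
    scores = [SCALE[r] for r in ratings]
    return sum(s >= SCALE["acceptable"] for s in scores) / len(scores)

ratings = ["excellent", "good", "unacceptable", "acceptable", "poor"]
print(acceptable_rate(ratings))  # 0.6
```

Tracking a threshold-based rate like this, rather than a mean of ordinal scores, avoids treating the gap between "poor" and "acceptable" as equal to the gap between "good" and "excellent".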
There are also improvements in smart data preprocessing. The raw data collected shows a noticeable imbalance between negative and positive labels. To mitigate this, AKA used a subsampling technique that reduces the amount of training data but improves the quality of the model.
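The post does not detail the subsampling method, but a common approach is random undersampling of the majority class. The following is a minimal sketch of that idea, not AKA's actual pipeline:

```python
import random
from collections import Counter

def subsample_majority(examples, seed=0):
    """Randomly undersample the majority class so every label
    appears equally often. `examples` is a list of (text, label)."""
    rng = random.Random(seed)
    by_label = {}
    for ex in examples:
        by_label.setdefault(ex[1], []).append(ex)
    # Keep only as many examples per label as the rarest label has.
    n = min(len(items) for items in by_label.values())
    balanced = []
    for items in by_label.values():
        balanced.extend(rng.sample(items, n))
    rng.shuffle(balanced)
    return balanced

# Toy imbalanced dataset: 3 positives, 9 negatives.
data = [("good bot", 1)] * 3 + [("bad bot", 0)] * 9
print(Counter(label for _, label in subsample_majority(data)))
# Counter({1: 3, 0: 3}) -- equal counts after subsampling
```

This trades dataset size for balance, which matches the post's note that the technique "lowers the amount of data but improves the quality of the model".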
The second major update adjusts the level of Muse's responses to suit the user's level of English, following the Common European Framework of Reference for Languages, which categorizes a speaker's proficiency into one of six levels: A1, A2, B1, B2, C1, and C2, with A1 the most elementary and C2 the most advanced. In Easy Mode, the level of Muse's responses is determined at the utterance level, and a response will not exceed a given user's level.
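One way to picture the Easy Mode constraint is as a filter over candidate utterances. This is an illustrative sketch, not Muse's implementation; how each utterance's CEFR level is estimated is assumed and not shown:

```python
# CEFR levels in increasing order of proficiency, as listed in the post.
CEFR = ["A1", "A2", "B1", "B2", "C1", "C2"]
RANK = {level: i for i, level in enumerate(CEFR)}

def filter_by_level(candidates, user_level):
    """Keep only candidate responses whose estimated CEFR level does
    not exceed the user's level (the Easy Mode behaviour described).
    `candidates` is a list of (utterance, estimated_level) pairs;
    the level-estimation step itself is a hypothetical upstream stage."""
    return [u for u, lvl in candidates if RANK[lvl] <= RANK[user_level]]

candidates = [
    ("Hi!", "A1"),
    ("How have you been lately?", "B1"),
    ("That notion strikes me as rather quixotic.", "C2"),
]
print(filter_by_level(candidates, "B1"))
# ['Hi!', 'How have you been lately?']
```

Because the check runs per utterance, this matches the post's point that the response level is determined at the utterance level rather than per conversation.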
Besides the two major changes described above, Muse V2 includes several minor changes, such as improved data engineering, a more advanced hyperparameter search, and work on the legacy model. With these updates, Muse V2 will offer users a better communication experience and marks a significant milestone for AKA.