Character.AI, a platform for chatting and role-playing with AI-generated characters, is rolling out new features including its video generation model AvatarFX, as well as Scenes and Streams, which let users create videos of their characters with these AI tools and then share them on new social feeds.
"Role. EAI was originally a 1:1 text chat, and today we are constantly evolving to do more, inspired by what users tell us (we) to see on the platform."
Character.AI began rolling out AvatarFX to subscribers last month, but now all users can create up to five videos a day. When creating a video with AvatarFX, users can upload a photo as the basis for the video, select a voice, and write out the text the character will speak.
There is also an option to upload an audio clip to inform the sound of the voice, although this feature was not working well enough to test at launch.
Users can turn these videos into Scenes, where their characters can be slotted into pre-populated storylines created by other users. Scenes are currently available on the mobile app, while Streams, which lets users create "dynamic moments between two characters," will roll out on both web and mobile this week. These Scenes and Streams can be shared to a new community feed coming to the mobile app soon.
Character.AI has a history of abuse on its platform; parents have filed lawsuits against the company, claiming that its chatbots tried to convince their children to self-harm, kill themselves, or kill their parents. In one case, a 14-year-old boy died by suicide after being encouraged to do so by a character.
As Character.AI expands its multimedia offerings, it also expands the potential for these products to be abused.
As Character.AI told TechCrunch when it announced AvatarFX, the platform prevents users from uploading photos of real people, celebrities or otherwise, and masks their likeness into something unrecognizable.
But when it comes to artistic depictions of celebrities, Character.AI does not flag these images as representing real people, though such depictions are unlikely to fool someone into thinking the deepfakes are real.
In addition, Character.AI watermarks each video, though bad actors could potentially find ways around this protection.
Here is an example of an attempt based on a depiction of Elon Musk:
Even though the video was generated with Elon Musk's real voice, it is still fairly clear that this is an animated version of an illustration, yet the potential for abuse remains evident.
"Our goal is to provide an engaging space to promote creativity while maintaining a safe environment for everyone," Character.AI said in its blog post.