
AI images on Instagram; Meta chatbot plans create privacy and misinformation fears

Meta seems to be planning to get into artificial intelligence in a big way – with AI images on Instagram, and chatbots aiming to gather more personal data from users …

AI images on Instagram

Developer and reverse engineer Alessandro Paluzzi found code in the Instagram app intended to label AI images.

[Screenshot of the in-app AI label found in the Instagram code: "The creator or Meta said that this content was created or edited with AI. Image generated by Meta AI. What is generative AI? People use AI tools to create text, images and video from single descriptions. How to know when posts use AI: Content created with AI is typically labeled so that it can be easily identified."]

Meta declined to comment to Engadget, but CEO Mark Zuckerberg did drop some pretty big hints in last week’s earnings call.

You can imagine lots of ways AI could help people connect and express themselves in our apps: creative tools that make it easier and more fun to share content, agents that act as assistants, coaches, or that can help you interact with businesses and creators, and more.

The above image is a photo by Benyamin Bohlouli that was expanded horizontally using Photoshop’s beta generative AI feature.

Meta chatbots

Zuckerberg made it particularly clear that we can expect chatbots (aka “agents that act as assistants”), and the Financial Times has more details.

Three people with knowledge of the plans [said] some of the chatbots, which staffers have dubbed “personas”, take the form of different characters.

The company has explored launching one that speaks like Abraham Lincoln and another that advises on travel options in the style of a surfer, according to a person with knowledge of the plans.

The FT says that the first of these may launch in September.

Chatbot privacy and misinformation fears

While they might sound fun, experts say the company will likely use them to gather more personal data about users, which could then be used to serve more personalized ads.

“Once users interact with a chatbot, it really exposes much more of their data to the company, so that the company can do anything they want with that data,” said Ravit Dotan, an AI ethics adviser and co-founder of the Collaborative AI Responsibility lab at the University of Pittsburgh. 

Another fear is the known phenomenon of generative AI chatbots simply making up fake facts, which people then assume to be real.

Meta will probably draw scrutiny from experts policing the chatbots for signs of bias, or the risk that they share dangerous material or misinformation, known as “hallucinations”. 

Previous bots created by the company did indeed quickly start spreading misinformation.




Author: Ben Lovejoy

Ben Lovejoy is a British technology writer and EU Editor for 9to5Mac. He’s known for his op-eds and diary pieces, exploring his experience of Apple products over time, for a more rounded review. He also writes fiction, with two technothriller novels, a couple of SF shorts and a rom-com!

