In brief: Another day, another AI chatbot spreading misinformation on the internet.
Meta last week unleashed BlenderBot 3, a chat language model, on the web as an experiment, and it behaved just as you'd expect.
BlenderBot 3 was quick to assert that Donald Trump is, and will remain, president of the United States after 2024, and spouted anti-Semitic views when asked controversial questions, as Business Insider showed. In other words, BlenderBot 3 is as prone to spreading fake news and holding biased opinions rooted in racial stereotypes as any other language model trained on text scraped from the internet.
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
Meta has warned netizens that its chatbot can make "incorrect or offensive statements," and it's keeping the bot live online to gather more data for its experiments. People are encouraged to like or dislike BlenderBot 3's responses, and to notify researchers if they think a particular message is inappropriate, illogical, or rude. The goal is to use this feedback to develop a safer, less toxic, and more effective chatbot in the future.
Google search snippets should spread less fake news
The search giant has rolled out an AI model to help make the text boxes that sometimes appear when users type questions in a Google search more accurate.
These descriptions, known as featured snippets, can be useful when people are looking for specific facts. For example, type "How many planets are there in the solar system?" and a featured snippet stating "eight planets" will appear. Users don't have to click through random web pages and read them to find the answer; the featured snippet surfaces it automatically.
But Google's answers are not always accurate, and have sometimes given a specific date for a fictitious event, such as the assassination of Abraham Lincoln by the cartoon dog Snoopy, according to The Verge. Google said its latest system, built on the Multitask Unified Model (MUM), should reduce featured snippets generated for false-premise questions by 40 percent; in those cases, it likely won't surface any text description at all.
"Using our latest AI model, our systems can now understand the notion of consensus, which is when multiple high-quality sources on the web all agree on the same fact," Google explained in a blog post.
"Our systems can check snippet callouts (the word or words shown above the featured snippet in a larger font) against other high-quality sources on the web, to see if there's a general consensus for that callout, even if sources use different words or concepts to describe the same thing."
OpenAI's DALL-E 2 helps create a Heinz ketchup ad
Heinz, the US food giant, has teamed up with a creative agency to produce an ad using images generated by OpenAI's DALL-E 2 model to promote its most famous product: ketchup. The ad is the latest installment in Heinz's "Draw Ketchup" campaign, but instead of turning to humans for sketches, Canadian advertising agency Rethink consulted machines.
"Like many of our briefs, the task was to demonstrate Heinz's iconic role in today's pop culture," Mike Dubrick, executive creative director at Rethink, told The Drum this week. "Bringing the idea to the brand came next. We rarely wait until the formal presentation when we've come up with something we think is cool to share."
The end result is a clever ad with a clear and simple message: given all sorts of text prompts containing the word "ketchup," DALL-E 2 generates something unmistakably resembling a bottle of Heinz. In other words, per the company's slogan, it "has to be Heinz." You can see the ad below.
DALL-E 2 also recently helped an artist craft a magazine cover for Cosmopolitan, another example of how text-to-image tools can be put to commercial use in the creative industries. ®