The Gist
- Do content stuff for me, AI. OASIS is a new app that can turn jumbled thoughts into a polished, clear message using GPT-3 technology.
- Oh, Bard. Google's newly announced chatbot Bard made a $100 billion mistake, which resulted in a 9% decrease in Alphabet's stock.
- Surprise: more generative AI. Runway Research has unveiled Gen-1, a generative AI system that uses language and images to generate videos in various styles, including stylization, storyboard, mask, render and customization.
AI is revolutionizing the world: computer vision enables self-driving cars and facial recognition systems, Generative Adversarial Networks (GANs) produce images and music from a set of inputs, and advances in Natural Language Processing (NLP) power chatbots that understand and respond to human language.
Today, our new column delves into the latest AI advancements, including cutting-edge tech from Microsoft Bing, Google Bard, OpenAI's ChatGPT and more. We'll examine boundary-pushing innovations, explore how AI is shaping our world, and look at what the people inside that world are talking about and how they're using it.
So here we go.
OASIS Offers AI Cure for Jumbled Thoughts
Besieged by a case of word vomit? It’s a common affliction: your thoughts make sense in your head, but you can’t quite shape them into a clear, cohesive message. OASIS, a new app introduced in beta on Feb. 12, promises to “turn your jumbled thoughts into a polished, clear message, powered by the cutting-edge technology of GPT-3.” Or, put less eloquently, the developers ask you to “say what's on your mind and let OASIS transform your verbal vomit into a crisp, compelling message.”
I tried it out, “verbally vomiting” some ideas I have for a new book, and in seconds the app turned my ramble into a coherent, organized and readable blog post, outline, Twitter thread, professional email, text message and LinkedIn post. It also offered an “Explain Like I’m Five” option and “Orange Man Speak” (apparently written in the style of former President Trump).
If you’d like to give it a try, sign up for the OASIS AI beta.
In other AI news...
Google's $100 Billion Gaffe
It appears that Google's newly announced, but not yet ready for primetime, chatbot Bard made a $100 billion mistake, and Google employees are reportedly not happy.
CNBC reported Feb. 10 that Googlers have taken to the internal meme generator Memegen to criticize CEO Sundar Pichai for his handling of the Bard announcement, calling it “rushed,” “botched” and “un-Googley,” according to messages and memes viewed by the news outlet.
Google first announced Bard on Feb. 6 and, in an advertisement touting the chatbot’s capabilities, shared a demo from its official Twitter account that same day. When asked “What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?” Bard responded that JWST took the “very first pictures” of an exoplanet outside our solar system.
However, according to NASA, the correct answer is the European Southern Observatory’s Very Large Telescope. The badly timed error was first spotted by astrophysicist Grant Tremblay the next day, the same day the company hosted an impromptu presser to reveal Google’s latest AI tech updates, an event that left some feeling underwhelmed, especially following Microsoft’s launch of its AI-powered Bing search engine a day earlier.
By market close on Feb. 8, Alphabet’s stock had fallen 9%, wiping out $100 billion in market value.
Related Article: Google Announces ChatGPT Rival 'Bard,' the AI-Powered Search
Tweet of the Day: 'I Admire His Enthusiasm'
People everywhere are trying out Microsoft’s new AI-infused Bing and feverishly posting their interactions across Twitter and Reddit, often with hilarious, and sometimes terrifying, results. One user got chastised after suggesting he had shared confidential information about Bing’s rules on Twitter, and another found that (when prompted) it will share a pretty insulting joke about men, though it would not joke about women because “it would be disrespectful and sexist.”
But today’s Tweet of the Day is from Scott Hanselman, who asked Bing, “What are your thoughts about Scott Hanselman being on Twitter?” Bing’s reply was...extensive. It was also personal, with Bing sharing its “thoughts” and its own “admiration.”
This new @bing is craaaaaazy pic.twitter.com/ih4mE1dZd0
— Scott Hanselman (@shanselman) February 13, 2023
Generative AI for Video
Runway Research has unveiled Gen-1, a generative AI system that uses language and images to generate videos in a variety of styles, even Claymation.
In an explainer video on its website, company officials say the tech can “realistically and consistently apply the composition and style of an image or text prompts to the target video, allowing you to generate new video content using an existing video” within five modes:
- Stylization: Transfer the style of any image or prompt to every frame of video.
- Storyboard: Turn mockups into fully stylized and animated renders.
- Mask: Isolate subjects in your video and modify them with simple text prompts.
- Render: Turn untextured renders into realistic outputs by applying an input image or prompt.
- Customization: Customize the model for higher-fidelity results.
In collaboration with Stability AI, Runway provided foundational research for Stable Diffusion, an open-source AI model. In December, Runway announced it had landed a $50 million Series C round led by Felicis, which reportedly puts the company at a $500 million valuation.
Video of the Day: Tom Scott, Everything Is About to Change
“It’s not about ChatGPT, it's about what it represents.”
In the midst of an email issue, British YouTuber Tom Scott was thrown into an existential AI crisis, which he documented in a viral YouTube video that accumulated 1.8 million views within a day.
Exploring recent AI developments like ChatGPT through the lens of the sigmoid curve (a pattern of slow initial learning, rapid growth and an eventual plateau), Scott questions where we are on that curve. And if we’re at the precipice, he believes, “everything is about to change just as fast and just as strangely as it did in the early 2000s. Perhaps beyond all recognition.”
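For readers unfamiliar with the shape Scott is describing, the sigmoid (or logistic) curve can be sketched in a few lines of Python. This is an illustrative aside rather than anything from the video, and the parameters are arbitrary.

import math

def logistic(t, midpoint=0.0, steepness=1.0, ceiling=1.0):
    """Logistic (sigmoid) curve: a slow start, a steep middle, then a plateau near `ceiling`."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Sampling the curve shows the pattern: early values crawl,
# the middle surges, and late values flatten out.
for t in range(-6, 7, 2):
    print(f"t={t:+d}  value={logistic(t):.3f}")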
Are AI-Generated Articles Acceptable?
In January, tech news site CNET had a little explaining to do when it was called out for using "automation technology" to create more than 70 articles published under the “CNET Money Staff” moniker. Despite the fact that each article came with a dropdown note that read “This article was generated using automation technology ... and thoroughly edited and fact-checked by an editor on our editorial staff,” the outlet was blasted for not making a more notable, official announcement of its plan.
“In November, one of our editorial teams, CNET Money, launched a test using an internally designed AI engine — not ChatGPT — to help editors create a set of basic explainers around financial services topics. We started small and published 77 short stories using the tool, about 1% of the total content published on our site during the same period,” CNET Editor-in-Chief Connie Guglielmo said in a post to CNET. “After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit.”
Guglielmo has since paused the use of AI to generate stories until “we feel confident the tool and our editorial processes will prevent both human and AI errors.” Further, she said the outlet added “additional steps to flag potential misinformation” and promised to offer more byline transparency.
Did ChatGPT Earn an MBA?
A professor at the University of Pennsylvania’s Wharton School said his research shows ChatGPT would have received a B to B- grade on the final exam of a typical MBA core course.
And when professors from the University of Minnesota put ChatGPT through a law school simulation, the AI chatbot passed, achieving a C+ average.
“It is becoming increasingly likely that in the near future many lawyers will need to collaborate with AIs, like ChatGPT, both to save time and money and to improve the quality of their work product,” law professor Daniel Schwarcz said in an article posted to the University of Minnesota website.
In January, Joshua Browder, CEO of DoNotPay, announced he would be using an AI “robot” powered by ChatGPT to argue a traffic ticket case in court. But the move was nixed after prosecutors threatened legal action against Browder if he proceeded.
Related Article: ChatGPT: What You Need to Know
OpenAI Releases 'Unreliable' New Tech
OpenAI has announced a new program called “Classifier,” tech that works to distinguish text written by a human from text written by AI. But the company readily admits it’s not “fully reliable.”
“While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human,” company officials said in a website statement.
In testing, Classifier correctly identified 26% of AI-written text as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time.
“Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems,” said company officials. “We’re making this classifier publicly available to get feedback on whether imperfect tools like this one are useful. Our work on the detection of AI-generated text will continue, and we hope to share improved methods in the future.”
Voice Cloning Chaos
Following an infusion of $2 million in pre-seed funding led by Credo Ventures in late January, ElevenLabs, an AI voice technology startup that creates lifelike speech synthesis tools, announced the launch of its beta platform, which it describes as “promising to revolutionize storytelling.”
“Our ultimate goal is to let people enjoy any content they find relevant and interesting, regardless of what language they speak," Piotr Dabkowski, co-founder of ElevenLabs, said in a statement.
But just over a week later, ElevenLabs took to Twitter to thank everyone for trying the beta platform while noting “an increasing number of voice cloning misuse cases.”
While the company didn’t get into specifics, tech magazine Motherboard found that members of the anonymous web forum 4chan “used the product to generate voices that sound like Joe Rogan, Ben Shapiro and Emma Watson to spew racist and other sorts of material.”
In response, the British AI firm tweeted that it was “implementing additional safeguards.”
Microsoft CTO Picks Top Three AI Advancements
Kevin Scott, Microsoft’s chief technology officer, recently shared the three AI advancements he’s most impressed with. They include:
- GitHub Copilot: According to Scott, GitHub Copilot, a large language model-based system that turns natural language prompts into code, “has this dramatic positive impact on developer productivity.” (A rough illustration of that prompt-to-code pattern follows this list.)
- DALL∙E 2: While Scott admits that generative image models like DALL∙E 2 don’t turn ordinary people into professional artists, they do give “a ton of people a visual vocabulary that they didn’t have before — a new superpower they didn’t think they would ever have.”
- Protein Folding: Scott is proud of the work the company has done with David Baker’s laboratory at the University of Washington’s Institute for Protein Design, harnessing machine learning for protein design.
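To make the Copilot point above concrete, here is a rough, hypothetical illustration of the prompt-to-code workflow Scott describes: a developer writes a plain-language comment, and the assistant proposes a completion along these lines. The function below was written for this column as an example, not taken from actual Copilot output.

# Developer's natural language prompt, written as a comment:
# "Given a list of order totals, return the average rounded to two decimals,
#  ignoring any negative values."

def average_valid_orders(totals: list[float]) -> float:
    # The kind of completion an assistant like Copilot might suggest.
    valid = [t for t in totals if t >= 0]
    if not valid:
        return 0.0
    return round(sum(valid) / len(valid), 2)

print(average_valid_orders([19.99, 5.50, -3.00, 12.25]))  # 12.58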
Magic Lands $23M
Magic, a startup building a code-generating platform, has announced a $23 million Series A round led by CapitalG (Alphabet's Independent Growth Fund).
Magic’s CEO and co-founder, Eric Steinberger, told CMSWire that the funding will go towards developing an AI software engineer.
"For decades, technology has been a tool. Soon, it will be a colleague," Steinberger said. "Magic aims to build an AI software engineer to work alongside human engineers, helping them write, review, debug and maintain code."
SCALE AI Announces $117M in Investments
SCALE AI, a Canada-based co-investment and AI innovation hub whose government funding is matched by contributions from the private sector, has announced what it calls its most significant financing round since its creation: $117 million in investments supporting 15 AI projects, nine of them new and six already backed by SCALE AI.
According to company officials, SCALE AI has supported more than 90 industry projects since 2019, with investments totaling half a billion dollars, 62% of which was funded by industry. The participating Canadian companies own 100% of the intellectual property (IP) generated in their SCALE AI projects.
"Over the past years, we have seen an increasing number of Canadian companies develop AI practices and reap the benefits,” Julien Billot, CEO of SCALE AI, said in a statement. “2022 was a record-breaking year for the organization with a remarkable total investment of $204 million in 33 industry projects.”
The nine new projects, drawing from $97 million in investments, include McCain Driving Impact, Coveo, Foxfire Labs, GUAY inc., Macrodyne Technologies, Bombardier, Routeique, Canam Group Inc. and BIM Track. The six previously supported projects, which will share an additional $20 million, include Canadian Tire, Bombardier, Kemira, BRP, Plusgrade and Ray-Mont Logistics.
Have a tip to share with our editorial team? Drop us a line: