The Gist

  • Diversity in AI? Levi Strauss' recently announced AI modeling partnership receives harsh backlash, with the strategy called "racist" and "lazy."
  • Pausing AI development: More than 1,400 (so far) notable researchers, tech leaders and AI experts call for all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
  • Superhuman AI: Experiment demonstrates AI completes tasks that would take many humans hours or even days to accomplish.

We should all agree that diversity in advertising and marketing is important. It’s always a good thing when the models used in branding reflect the consumers of that brand. Which is probably why Levi Strauss recently announced plans to diversify their “human models in terms of size and body type, age and skin color.”

Great news. But it didn’t work out so great for Levi Strauss.

The idea was on target. Typically, shoppers who visit the company's site will see one model used for one product, and the company simply wanted to “enable customers to see our products on more models that look like themselves, creating a more personal and inclusive shopping experience.” But in announcing a new partnership, the company soon made clear it intended to bring this idea to fruition by “supplementing” human models with AI-generated fashion models.

Welp ... the concept of incorporating diversity through AI, instead of with real, diverse human models, did not go over well, and the backlash was swift, with many angry Twitter users calling the move “racist,” “lazy” and “fake.”

Levi Strauss responded to the criticism in an editor’s note posted to its website on March 28, admitting, “our recent announcement of a partnership did not properly represent certain aspects of the program. For that, we take responsibility.”

“We do not see this pilot as a means to advance diversity or as a substitute for the real action that must be taken to deliver on our diversity, equity and inclusion goals and it should not have been portrayed as such,” said company officials. “That being said, we are not scaling back our plans for live photo shoots, the use of live models, or our commitment to working with diverse models ... The partnership may deliver some business efficiencies that provide consumers with a better sense of what a given product looks like but should not have been conflated with the company’s diversity, equity and inclusion commitment or strategy.”

Ok then.

In other AI news...

AI Scientists Call for Pause On 'Giant' AI Model Development

A group of more than 1,400 artificial intelligence (AI) researchers and scientists has signed an open letter calling for a pause on the development of "giant" AI models that require significant amounts of energy and computational resources to train. Published by the Future of Life Institute, an organization primarily funded by the Musk Foundation, the letter calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

Signatories include Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, along with researchers and educators from various universities, including Duke and Harvard, the CEOs of Stability AI, Pinterest and Getty Images, and many other high-profile figures. Notably absent from the list are OpenAI CEO Sam Altman, Alphabet CEO Sundar Pichai and Microsoft CEO Satya Nadella.

The letter argues that these models have significant negative environmental impacts, and that the AI research community needs to consider the consequences of its work: "The trend towards ever-larger AI models has real-world consequences, including exacerbating climate change through massive energy consumption … Without deliberate intervention, the AI research community risks losing sight of the concrete problems our field aims to solve." The signatories call for a shift toward developing more sustainable and environmentally friendly AI models and encourage researchers to consider the ethical implications of their work.

In response to the letter, Timnit Gebru, co-founder and executive director of the Distributed AI Research Institute and co-founder of the advocacy group Black in AI, said, "It's long past time to critically examine the environmental impact of AI research and development, and make the hard decisions necessary to ensure that we are not contributing to the climate crisis … We must create a new paradigm of research and development that is rooted in principles of sustainability, equity, and accountability."

Related Article: Adobe Launches New GenAI, New NVIDIA AI Partnership, More News

OpenAI's Altman Discusses AI

Meanwhile, OpenAI CEO Sam Altman did discuss the impact of AI on the world and the need for collaboration. In an interview with Kara Swisher, Altman talked through the current state and future of artificial intelligence. Here are three insights he shared:

  • Responsible AI. AI is transforming industries and society at an unprecedented rate, and it is critical to ensure that this transformation is inclusive and beneficial to everyone. Altman stressed the importance of developing AI in a responsible and ethical manner, with a focus on creating value for all people.
  • Collaboration with government. Altman believes that the key to advancing AI lies in collaboration and sharing of knowledge and resources. He emphasized the need for companies, governments and individuals to work together to accelerate progress and ensure that AI is used for the benefit of all.
  • Risks, yes. Rewards, yes. Altman acknowledged the potential risks and challenges posed by AI, such as job displacement and algorithmic bias, but he also expressed optimism about the potential benefits, such as improved healthcare, education and environmental sustainability. He emphasized the importance of taking a proactive approach to addressing these risks and maximizing the benefits of AI.

'Superhuman' AI Completes Tasks of Marketing Strategist and Social Media Manager

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, recently conducted an experiment to see what powerful generative AI language models like GPT-3 can achieve in 30 minutes. He documented the outcomes on his blog, One Useful Thing, describing the results as "superhuman."

Taking on tasks commonly done by a marketing strategist, a site developer and a social media manager, the AI tools, Mollick found, could deliver work that would take many humans hours or even days to accomplish.

“I gave myself 30 minutes and tried to accomplish as much as I could during that time on a single business project. At the end of 30 minutes, I would stop,” Mollick said. “In 30 minutes, it: did market research, created a positioning document, wrote an email campaign, created a website, created a logo and ‘hero shot’ graphic, made a social media campaign for multiple platforms, and scripted and created a video.”


Mollick's test highlighted that AI tech can significantly enhance productivity and create personalized experiences that help users work faster and more efficiently, reducing the time spent on mundane tasks and freeing up time for more important work.

“The key is that I was able to do this using the tools available today, without any specific technical knowledge, and in plain English prompts,” Mollick said. “I just asked for what I wanted, and the AI provided it. That means almost everyone else can do it, too. We are already in a world of superhumans, we just have to wait for the implications.”

Hey, Good Looking: Research Finds Appearance Affects AI Credibility

At the University of Cambridge, researchers found that the effectiveness of using robots as mental wellbeing coaches in the workplace depends largely on the appearance of the robot.

The experiment was conducted in a tech consultancy firm using two different robot wellbeing coaches with identical voices, facial expressions and scripts. However, each had a different appearance. One was a “toy-like robot” and the other was more of a “humanoid-like robot.”

"We interviewed different wellbeing coaches and then we programmed our robots to have a coach-like personality, with high openness and conscientiousness," said co-author Minja Axelsson. "The robots were programmed to have the same personality, the same facial expressions and the same voice, so the only difference between them was the physical robot form."

Researchers found that participants overwhelmingly preferred the toy-like robot, a preference the researchers attribute to participants having lower expectations of the “toy” and finding it “easier to talk with.” Meanwhile, participants seem to have expected more from the humanoid robot and felt it didn’t meet their expectations.

The main finding, according to Axelsson, is that perception and expectation are intertwined with appearance and “perceptions of how robots should look or behave might be holding back the uptake of robotics in areas where they can be useful.”

Related Article: GPT-4 Is Here, Microsoft Gives Its AI Ethics Team the Boot, More AI News

Microsoft Announces Security Copilot

Microsoft has unveiled a new security tool called Security Copilot that uses artificial intelligence (AI) to help security analysts identify and respond to threats faster. Security Copilot provides a collaborative workspace where security teams can share information and insights in real time, enabling them to work together more efficiently.

Powered by AI and machine learning algorithms that can detect patterns and anomalies in data, the tool can quickly identify potential threats. Security Copilot also integrates with other Microsoft security tools, such as Microsoft Defender and Azure Sentinel.

AI Video of the Week

Ilya Sutskever, chief scientist of OpenAI, discusses spies, enlightenment and AI that is smarter than us in this week’s video pick.

AI Tweet of the Week