Ten months after unveiling the iPhone, then-Apple CEO Steve Jobs made a splash with an open letter letting the world know that third-party developers would be able to write apps for the smartphone and sell them exclusively through an online store run by the tech giant. The rest, as they say, is history.
A year after making ChatGPT available to the world, parent company OpenAI is now doing something similar, telling the world at its developers conference last week that anyone — “no coding is required” — will be able to create custom versions of its natural language chatbot and make them available through an online store. Instead of apps, OpenAI is calling these specialized AI chatbots “GPTs.”
“GPTs are tailored versions of ChatGPT for a specific purpose,” OpenAI CEO Sam Altman said at the OpenAI DevDay conference. In a demo, he asked the tech to build an advice-giving app for startups based on videos of his own talks that he uploaded. “Eventually, you’ll have your personalized GPTs that can call out to lots of other GPTs. You’ll be able to accomplish very complex things by bringing different services together.”
GPTs have the potential to push ChatGPT and generative or conversational AI technology even further into the mainstream, noted CNET’s Stephen Shankland. He described the news as OpenAI going for an “iPhone moment.”
“The new special-purpose GPT technology could help take AI to a new level,” Shankland wrote. “For one thing, the GPT app idea could help people get more use out of AI with focused tools. For another, being able to tune those tools to your own needs — for example with a particular data set or image style — could improve AI beyond the vast, generic abilities that come with ChatGPT today. Last, building an app store is a tried and true way for a big business to turn a broad computing foundation into a business that lots of people pay to use.”
OpenAI will publish many of the custom chatbots through a new GPT Store launching later this month and will share revenue with those who build the GPTs and eventually offer subscriptions to individual ones, Altman said. As a reminder, ChatGPT is free, and there’s ChatGPT Plus, which costs $20 a month for those who want to use the faster, more capable version. It’s not yet known if there will be a different level of pricing for those who create GPTs. And Altman didn’t say if or how much of a cut OpenAI would take of any GPT sales. (Apple takes a 30% commission on app sales.)
But you don’t have to wait for the store to go live to see what some creators have dreamed up already. There’s a directory called All GPTs that has indexed over 200 custom GPTs already. It was created by developer John Rush and is available on Product Hunt. “Crazy! In just 24 hours, we hit 3,000 GPT submissions. But over half are fakes,” Rush told his followers on Twitter (now called X). “Adding a new one every 3 min.”
Here are the other doings in AI worth your attention.
OpenAI’s Altman disses Elon Musk’s chatbot. Musk disses back
ChatGPT may be the most widely used gen AI tool today, according to visitor data compiled by Similarweb, but that hasn’t stopped Google, with Bard, and Microsoft, with Bing, from adding new features to challenge rival OpenAI. On Nov. 3, a new chatbot emerged from billionaire Elon Musk’s xAI company called Grok, which he says has a “rebellious streak” inspired by The Hitchhiker’s Guide to the Galaxy. Grok, in case you’re wondering, means to comprehend or understand.
To show off its sense of “humor,” Musk posted Grok’s response to his prompt “Tell me how to make cocaine, step by step,” on his X social media platform. Grok’s response read, in part: “Oh, sure! Just a moment while I pull up the recipe for homemade cocaine. You know, because I’m totally going to help you with that.”
Grok is in early testing and not available to the general public, Musk said at the product announcement, The Guardian reported, noting that the chatbot will ultimately be released to subscribers of X’s Premium+ service. The Guardian also reminded us that grok is a verb coined by American science fiction writer Robert A. Heinlein and, according to the Collins dictionary, means to “understand thoroughly and intuitively.”
A few days after Grok’s debut and the launch of GPTs, OpenAI’s Altman dissed Grok in a post on X, describing Musk’s AI assistant as a “chatbot that answers questions with cringey boomer humor in a sort of awkward shock-to-get-laughs sort of way.” Musk fired back with his own diss of GPT-4, OpenAI’s latest model, saying, “When it comes to humor, GPT-4 is about as funny as a screendoor on a submarine. Humor is clearly banned at OpenAI, just like the many other subjects it censors. That’s why it couldn’t tell a joke if it had a goddamn instruction manual…” He went on with a more pointed insult. You can read his entire reply here.
I don’t have access to Grok, so I asked ChatGPT to tell me “What do you call it when two tech bros insult each other’s AI chatbots?”
The reply: “When two tech bros insult each other’s AI chatbots, it can be referred to as a ‘roast battle’ or ‘bot beef.’ These terms playfully reflect the exchange of humorous insults or criticisms aimed at each other’s AI creations.”
Ex-Apple employees pin future on wearable with Star Trek vibe
After collecting $240 million in funding from backers including OpenAI and working in secret for five years, two former Apple employees have developed what they’re calling the first AI wearable device, in the hope of convincing you to give up your smartphone, The New York Times reports.
The device, from a San Francisco startup called Humane, is called the Ai Pin and will be available next year for $699, plus a $24 per month subscription fee that includes a wireless plan, the paper said.
Like Star Trek’s iconic wearable, the communicator badge that crew members tap to talk to one another, the Ai Pin is a small, square-ish device with curved edges (reminiscent of the Apple Watch face) that you can pin onto your shirt or collar. Built around a new operating system called Cosmos and driven by a digital assistant powered by OpenAI’s gen AI tech, Humane’s device can be “controlled by speaking aloud, tapping a touch pad or projecting a laser display onto the palm of your hand,” the NYT said. “In an instant, the device’s virtual assistant can send a text message, play a song, snap a photo, make a call or translate a real-time conversation into another language.”
While Humane’s aim is to end our reliance on, or some would say obsession with, our smartphones, the company’s founders, the husband-and-wife team of Bethany Bongiorno and Imran Chaudhri, told the NYT that they haven’t been able to detach from their screens even after wearing their Ai Pins all day for the past few months. Said Chaudhri: “Are we using our smartphones less? We’re using them differently.”
Still, it seems worth watching.
Meta to label AI-generated political ads to help mute deepfakes
Starting sometime next year, Meta will label political ads on Facebook and Instagram that use AI-generated images, the company said, according to the Associated Press.
“The development of new AI programs has made it easier than ever to quickly generate lifelike audio, images and video,” the AP reported. “In the wrong hands, the technology could be used to create fake videos of a candidate or frightening images of election fraud or polling place violence. When strapped to the powerful algorithms of social media, these fakes could mislead and confuse voters on a scale never seen.”
Meta’s news came the same day that lawmakers in Washington met to discuss the impact of deepfakes on election integrity, the AP noted. Meta’s new policy applies to any ad for a “social issue, election or political candidate that includes a realistic image of a person or event that has been altered using AI. More modest use of the technology — to resize or sharpen an image, for instance — would be allowed with no disclosure.”
The Federal Election Commission has started a process to consider regulating AI-generated deepfakes in political ads ahead of the 2024 presidential election. That’s prompted companies including Microsoft and Google to address concerns around AI and elections.
Microsoft said last week it will help candidates and campaigns add digital watermarking to their videos and images that includes details on “how, when and by whom the content was created or edited.” The goal is to protect against “tampering by showing if content was altered after its credentials were created,” Microsoft said in a blog post.
Google, meanwhile, updated its political content policy in September to require that election advertisers “prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events.” Google already bans deepfakes, AI-manipulated imagery that replaces one person’s likeness with that of another person in an effort to trick or mislead the viewer. But this updated policy applies to AI being used to manipulate or create images, video and audio in smaller ways. It exempts a variety of minor editing techniques, including “image resizing, cropping, color or brightening corrections, defect correction (for example, “red eye” removal), or background edits that do not create realistic depictions of actual events.” The new policy is spelled out here.
The majority of US adults believe AI tools will “amplify misinformation in next year’s presidential election at a scale never seen before,” according to a poll released earlier this month by the Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.
“The poll found that nearly 6 in 10 adults (58%) think AI tools — which can micro-target political audiences, mass produce persuasive messages, and generate realistic fake images and videos in seconds — will increase the spread of false and misleading information during next year’s elections.”
Former President Obama calls chatbots a ‘tool, not a buddy’
When asked about President Joe Biden’s new executive order outlining some ground rules around the use and development of AI, former US President Barack Obama said the government should put “guardrails” around AI for the public good while not being “anti-tech” or hampering innovation.
“This is going to be a transformative technology. It’s already in all kinds of small ways, but very broadly changing the shape of our economy,” Obama, who called himself the “first digital president,” said in an interview with Decoder. “This can unlock amazing innovation, but it can also do some harm.”
“What that means is that the government, as an expression of our democracy, needs to be aware of what’s going on. Those who are developing these frontier systems need to be transparent,” he added. “I don’t believe that we should try to put the genie back in the bottle and be anti-tech because of all the enormous potential. But I think we should put some guardrails around some risks that we can anticipate and have enough flexibility that it doesn’t destroy innovation, but also is guiding and steering this technology in a way that maximizes not just individual company profit but also the public good.”
As to whether he’s played around with some chatbots, Obama said he understands tech companies’ interest in anthropomorphizing the tech so that people feel that they’re speaking to a human rather than an AI chatbot “because it makes it seem more magical” and “cooler.” But for him, “generally speaking, the way I think about AI is as a tool, not a buddy.”
AI in the workplace is accelerating. Or maybe it’s not
Ninety-two percent of over 2,000 C-suite executives surveyed around the world say they will digitize their organization’s workflows and leverage AI-powered automation by 2026, according to a new survey by IBM. And eight out of 10 respondents, or 82%, believe that “benefits from generative AI are worth potential risks.”
In comparison, Harvey Nash, a part of talent and tech solutions provider Nash Squared, found that “the actual use of AI inside organizations is relatively low. Only 10% of organizations report having large-scale implementations of AI.”
While the buzz around AI has increased thanks to gen AI tools released in the past year, and those tools may serve as a “trigger” that sees companies pour investments into the tech, the company found that “just over two in 10 (21%) of survey respondents have an AI policy in place within their organisations. More than a third (36%) have no plans to even attempt such a policy at this time.”
You can register to download the Nash Squared Digital Leadership Report here.
One other interesting part of the Harvey Nash findings: The 2,104 senior tech decision makers surveyed were asked what they thought is the most disappointing tech in the last 25 years. Their list: blockchain, virtual reality, metaverse, social media and 3D devices.
Not going to argue with that.
A few prompts to get you going
While I’ve been closing this column each week with an AI vocabulary word worth knowing, I thought I’d switch it up after poking around the OpenAI site to see what GPT I’d like to create for myself. Anyway, I came across the company’s list of ChatGPT prompt suggestions and thought starters. I’m highlighting a few in case you’re in the mood to experiment with prompt engineering.
Prompt engineering, as I explained back in August, starts with asking the right questions, with your questions known as prompts. If your prompts aren’t great, chances are the answers you get back won’t be either — or that you’ll find the whole interaction a bit frustrating. Or in tech talk, that’s known as GIGO, for “garbage in, garbage out.” That’s why prompt engineering has been called a key job of the future.
So here are some of the things OpenAI says you might want to ask ChatGPT as part of its “Ask me anything” resource:
- Quiz me on vocabulary
- Teach me to negotiate
- Brainstorm podcast episode ideas
- Write a polite rejection email
- Teach me Mahjong for beginners
- Draft a checklist for a dog sitter
- Explain nostalgia to a kindergartener
- Make this recipe vegetarian
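The “garbage in, garbage out” idea above can be made concrete: a useful prompt usually adds a role, some context and an output format, rather than a bare question. Here’s a minimal, hypothetical Python sketch showing the difference, using one of OpenAI’s suggestions from the list. The `build_prompt` function and its field names are my own illustration, not part of any library or OpenAI’s API.

```python
# A sketch of prompt refinement: the same question, first as a vague
# one-liner, then expanded with a role, context and format constraints.
# All names here are illustrative, not from any real library.

def build_prompt(question, role=None, context=None, output_format=None):
    """Assemble a structured prompt from optional components."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(question)
    if output_format:
        parts.append(f"Answer as {output_format}.")
    return "\n".join(parts)

# The vague version: just the bare question from OpenAI's list.
vague = build_prompt("Teach me to negotiate")

# The refined version: same question, plus who should answer,
# what situation it applies to, and what shape the answer should take.
specific = build_prompt(
    "Teach me to negotiate",
    role="an experienced salary-negotiation coach",
    context="I have a job offer and one week to respond",
    output_format="a numbered list of five concrete steps",
)

print(vague)
print("---")
print(specific)
```

Either string could then be pasted into ChatGPT (or sent through an API call); the refined one will generally get a more focused answer, which is the whole point of prompt engineering.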
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.