
The Instagram Founders’ News App Artifact Is Actually an AI Play


The invasion of chatbots has disrupted the plans of countless businesses, including some that had been working on that very technology for years (looking at you, Google). But not Artifact, the news discovery app created by Instagram cofounders Kevin Systrom and Mike Krieger. When I talked to Systrom this week about his startup—a much-anticipated follow-up to the billion-user social network that’s been propping up Meta for the past few years—he was emphatic that Artifact is a product of the recent AI revolution, even though it was devised before GPT began its chatting. In fact, Systrom says that he and Krieger started with the idea of exploiting the powers of machine learning—and then ended up with a news app after scrounging around for a serious problem that AI could help solve.

That problem is the difficulty of finding individually relevant, high-quality news articles—the ones people most want to see—and not having to wade through irrelevant clickbait, misleading partisan cant, and low-calorie distractions to get those stories. Artifact delivers what looks like a standard feed containing links to news stories, with headlines and descriptive snippets. But unlike the links displayed on Twitter, Facebook, and other social media, what determines the selection and ranking is not who is suggesting them but the content of the stories themselves. Ideally, that means the content each user wants to see, drawn from publications vetted for reliability.

News app Artifact can now use AI technology to rewrite headlines users have flagged as misleading. Courtesy of Nokto

What makes that possible, Systrom tells me, is his small team’s commitment to the AI transformation. While Artifact doesn’t converse with users like ChatGPT—at least not yet—the app relies on a homegrown large language model that’s instrumental in choosing which news articles each individual sees. Under the hood, Artifact digests each news article so that its content can be represented by a long string of numbers.

By comparing those numerical hashes of available news stories to the ones that a given user has shown preference for (by their clicks, reading time, or stated desire to see stuff on a given topic), Artifact provides a collection of stories tailored to a unique human being. “The advent of these large language models allow us to summarize content into these numbers, and then allows us to find matches for you much more efficiently than you would have in the past,” says Systrom. “The difference between us and GPT or Bard is that we’re not generating text, but understanding it.”
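
That matching step is easy to picture in code. Below is a minimal sketch in Python, with the open-source sentence-transformers library standing in for Artifact’s private, homegrown model; the averaged “taste” vector and the function names are illustrative assumptions, not Artifact’s actual method.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# A generic open-source embedding model; Artifact's own model is private.
model = SentenceTransformer("all-MiniLM-L6-v2")

def rank_stories(candidates: list[str], history: list[str]) -> list[str]:
    """Order candidate articles by similarity to stories the user engaged with."""
    cand_vecs = model.encode(candidates, normalize_embeddings=True)  # one vector per article
    hist_vecs = model.encode(history, normalize_embeddings=True)
    profile = hist_vecs.mean(axis=0)  # crude per-user "taste" vector (an assumption)
    scores = cand_vecs @ profile      # higher score = closer to the user's interests
    return [candidates[i] for i in np.argsort(-scores)]
```

Calling rank_stories(todays_candidates, users_recent_reads) would return the day’s stories ordered by predicted interest; a production system would presumably also weight recency, source reliability, and explicit topic preferences.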

That doesn’t mean that Artifact has ignored the recent boom in AI that does generate text for users. The startup has a business relationship with OpenAI that provides access to the API for GPT-4, OpenAI’s latest and greatest language model, which powers the premium version of ChatGPT. When an Artifact user selects a story, the app offers the option to have the technology summarize the article into a few bullet points so users can get the gist of the story before they commit to reading on. (Artifact warns that, since the summary was AI-generated, “it may contain mistakes.”)
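
Artifact hasn’t published its integration details, but with the standard OpenAI chat-completions API the summary step could look roughly like this; the prompt wording and the appended caveat are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    """Condense an article into a few bullet points with GPT-4."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize the following article in three to five short, factual bullet points."},
            {"role": "user", "content": article_text},
        ],
    )
    bullets = response.choices[0].message.content
    # Artifact labels these summaries as machine-written.
    return bullets + "\n\nAI-generated summary; it may contain mistakes."
```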

Today, Artifact is taking another jump on the generative-AI rocket ship in an attempt to address an annoying problem—clickbaity headlines. The app already offers a way for users to flag clickbait stories, and if multiple people tag an article, Artifact won’t spread it. But, Systrom explains, sometimes the problem isn’t the story but the headline. It might promise too much, mislead, or lure readers into clicking only to find that key information was withheld from the headline. From the publisher’s viewpoint, winning more clicks is a big plus—but it’s frustrating to users, who might feel they have been manipulated.

Systrom and Krieger have created a futuristic way to mitigate this problem. If a user flags a headline as dicey, Artifact will submit the content to GPT-4. The model will then analyze the content of the story and write its own headline. That more descriptive title is the one the user sees in their feed. “Ninety-nine times out of 100, that title is both factual and more clear than the original one that the user is asking about,” says Systrom. That headline is shared only with the complaining user. But if several users report a clickbaity title, all of Artifact’s users will see the AI-generated headline, not the one the publisher provided. Eventually, the system will figure out how to identify and replace offending headlines without user input, Systrom says. (GPT-4 can do that on its own now, but Systrom doesn’t trust it enough to turn the process over to the algorithm.)
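
Pieced together from Systrom’s description, the flag-and-rewrite logic might look like the sketch below. Only the one-user versus many-users behavior comes from his account; the threshold value, the prompt, and the in-memory bookkeeping are hypothetical.

```python
from openai import OpenAI

client = OpenAI()
FLAG_THRESHOLD = 3  # hypothetical; the article says only "several users"

flag_counts: dict[str, int] = {}  # article_id -> number of clickbait flags
rewrites: dict[str, str] = {}     # article_id -> cached AI-written headline

def rewrite_headline(article_body: str) -> str:
    """Ask GPT-4 for a descriptive title based on the story itself."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Write one plain, factual, descriptive headline for this article. No clickbait."},
            {"role": "user", "content": article_body},
        ],
    )
    return response.choices[0].message.content.strip()

def headline_for(article_id: str, original: str, body: str, flagged_by_user: bool) -> str:
    if flagged_by_user:
        flag_counts[article_id] = flag_counts.get(article_id, 0) + 1
        if article_id not in rewrites:
            rewrites[article_id] = rewrite_headline(body)
        return rewrites[article_id]  # the flagging user sees the rewrite immediately
    if flag_counts.get(article_id, 0) >= FLAG_THRESHOLD:
        return rewrites[article_id]  # past the threshold, everyone sees it
    return original
```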

I point out to Systrom that this practice may drive publishers batty. After all, they spend tremendous energy brainstorming headlines (as WIRED does in a Slack channel where I don’t dare to venture) and often test multiple versions to see which one draws the most clicks or swipes. Who the hell is Artifact to rewrite the headlines of WIRED, Bleacher Report, or The New York Times?

Systrom says that Artifact will un-bait only a small minority of stories. But he makes no apologies for rewriting the ones that users flag as deceptive. “There’s no rule that says any link to content needs to be the title that someone else decided to show you, because that can be manipulative, or it can be misleading,” he says. I’d argue that there is at least an unspoken rule that third parties shouldn’t mess with the content of stories they link to, and headlines—even clickbaity ones—are indeed content. 

The new feature illustrates how seriously Artifact’s founders take their avowed mission to deliver the most relevant stories to users. But that doesn’t mean that the startup’s ultimate destiny will be limited to improving the quality of news consumption. Remember, from the very start, Systrom and Krieger’s goal was to use AI to solve a problem, not to improve reading habits. So don’t be surprised when Artifact branches out. Indeed, when I asked Systrom whether journalism was merely an entry point for Artifact—in the same way that Amazon began its march to ecommerce dominance through bookselling—his answer was a straightforward yes!

Systrom’s full answer is a good encapsulation of how one of Silicon Valley’s savviest founders sees opportunity in the current AI moment. “In new companies you always start off with something fairly specific, whether it’s Apple and personal computers, or Amazon and books, or Facebook and colleges. You start off with that specific, you build product market fit, and as you gain success, you expand the aperture of your mission. What I care most about is that people should consume what matters most to them, and not what matters most to someone who decides to post it. That can be news articles or music or shopping. But the core tenet here is that machine learning is going to drive the next wave.”

Sounds like an excellent “hed,” as we write in the news business. ChatGPT, what do you think?

Time Travel

Systrom’s idea that Artifact will evolve into something different is rooted in the experience that he and Krieger had with Instagram. The app began as Burbn, designed as a means to know what your friends were doing at that moment. User behavior led to a different direction, a new name, and an acquisition by Facebook. I wrote about Instagram’s origins in my 2020 book, Facebook: The Inside Story.

Over the next few weeks, the Burbn beta testers became a small but loyal community. Underline small. “It wasn’t exactly setting the world on fire,” Krieger would later write in an account of Instagram’s beginnings. “Our attempts at explaining what we were building was often met with blank stares, and we peaked at around 1,000 users.” The founders noted that photo sharing, which was envisioned as a slideshow in the app, seemed to be the most popular feature. Systrom and Krieger decided to rewrite Burbn to concentrate on that aspect. The app, written for the iPhone, would open to a camera, ready to capture and transmit a visual signal to the world that showed not just where you were and who you were with, but who you were. It would be primal, pre-linguistic, and lend itself to endless creativity. The photos would appear in a feed, a constant stream shared by people you chose to “follow.” It also nudged users into a performance mode, as by default any user could see your photos. It was much more Twitter-like than Facebook-ish.

Shifting Burbn into a camera-first app delighted Systrom. He’d always loved photography. He also had an affinity for old, funky things. He was the kind of guy who’d buy an old Victrola and display it as a piece of art. He was also a craftsman at heart; his standards for detail were Jobsian, without the snide insults to those who dared give him work that fell short. He and Krieger would spend hours on the tiniest detail, like getting the rounded corners right on the camera icon. It was the antithesis of “Move Fast and Break Things.”

One of the key breakthroughs on the revamped app came when Systrom was on a Mexican vacation with his girlfriend, Nicole. To his dismay, she told him she would be reluctant to use the product he was building 24/7 because she’d find it hard to match the quality of photos a certain friend of hers took. Systrom told her they looked good because the friend used filters to make the images more intriguing. So Nicole suggested maybe he should use filters in his product. He quickly added a filter to the app and used it the next day when the couple were at a taco stand to take a picture of a puppy with Nicole’s flip-flopped foot in the corner. That was the first picture he posted to the beta version of Burbn’s successor, which came to be called Instagram, a portmanteau of “instant” and “telegram.”

Ask Me One Thing

John asks, “K-12 educators in the USA—and the legislatures and boards that regulate them—are all over the place about what course of action makes the most sense for teachers confronted with AI chatbots. What are your thoughts?”

Thanks, John. The advent of super-smart chatbots is a challenge on two levels. The most basic and pressing issue is that right now students are using tools like Bard, Bing, or ChatGPT to assist with their assignments. This isn’t necessarily so bad, but the time saved might come at the expense of the discoveries students make while researching on their own. Or they may use a chatbot as a means of plagiarizing, which is outright bad. It’s proving difficult to weed out cheating, but that problem might lead to reforms in teaching methods that ultimately prove beneficial. I’m talking about more personalized examinations where students express their ideas directly to teachers. If there’s no time for that (and there should be), maybe we’re due for a return to in-class essays written longhand in those old blue books?

The other level I mentioned is the question of whether ubiquitous chatbots will alter what education itself means. If we can get machines to write for us, does writing become less important to learn? The dilemma is similar to, though much more fraught than, the questions that arose when search engines began delivering answers to factual questions so easily. Why memorize a historic date, or the Gettysburg Address, when one could instantly access such information?

I do believe that it would be tragic if we deprioritized the ability to organize thoughts, marshal evidence, and express ideas in clear, reasoned prose—just because AI will hereafter always be around to do it for us. Those mental tools apply not just to education but to how we lead our lives. That’s why the best educators will alter the nature of their pedagogy to make sure that students will be able to soar on their own brainpower. Those well-schooled learners will make better thinkers, better problem solvers, and better partners to their persistent AI companions.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

End Times Chronicle

The air quality in Philadelphia is lousy—because of wildfires in Nova Scotia.

Last but Not Least

Lots of AI scientists are warning that the work they are doing, and will keep doing, is a major threat to humanity. 

The case of the exploding hand sanitizer. You may never disinfect again.

An AI chatbot gets fired after recommending weight loss to patients with eating disorders.

Meanwhile, the CEO of WeightWatchers—who recommends weight loss more selectively—navigates a world of body positivity and Ozempic. But first, dessert.

Don’t miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today.
