Artificial Intelligence is completely reinventing media and marketing. The results are much weirder

When artificial intelligence is fully operational, it will transform the media and marketing industries. In particular, I believe that synthetic personalities powered by AI will change the way we learn about new products and how to use them.

In my previous article, I showed how the collapse of broadcast TV exposed a huge weakness in the advertising industry. And I pointed to the nascent field known as Influencer Media, and especially Virtual Influencers, as a harbinger of the future of engagement brand-building.

But that’s just the beginning.

What happens when artificial intelligence is available to any app, any advertising campaign, and any brand marketer? How will that change things?

Here’s my answer: the media landscape will be transformed so deeply that it will be completely unrecognizable. All the leftover junk from the 20th century will be kaputt, including one-size-fits-all video programs for mass audiences, appointment viewing of a TV schedule and the very concept of TV channels, and the outdated intrusion of interruption advertising.

Personalized programming and fully-responsive adbots will be the new norm.

Two weeks ago, I gave a speech to 4000 executives and managers at one of the biggest telecommunications companies in the world. They asked me to speak about the future of media and technology.

I decided to focus my remarks on artificial intelligence.

This was no arbitrary decision on my part. The single most important thing to understand about the new 5G networks is that they are software-defined. 5G involves a massive investment in new hardware, of course, but what’s special about 5G is the software. The entire 5G network will be designed, planned, monitored, managed and optimized by artificial intelligence. A software-defined network governed by AI will be flexible and responsive in ways that single-purpose hardware never can be.

5G will be the biggest deployment of distributed AI yet.

Today, the mobile operators are not advertising this fact widely. They are not eager to telegraph their competitive advantage to rivals.

Soon, however, they will need to speak openly about this new capability if they plan to open up their 5G networks to developers and ecosystem partners who will build fabulously inventive, unanticipated, mind-blowing applications on top of the network.

They will, that is, if the mobile network operators are smart enough to invite developers in, encourage them, support them, provide stable APIs and enable them to make a huge amount of money. Historically, this has not been a great strength of mobile network operators. They are famously clumsy to the point of incompetence when dealing with partners, especially when it comes to managing their developer ecosystems, so it remains far from clear that they will execute this strategy successfully.

But I am confident that AI as platform will happen eventually because of competition. Even if the mobile operators remain as closed and controlling as they have been in the past, they will face ferocious competition from the big cloud computing platforms that are already racing to deploy AI as a service. Google, Microsoft and Amazon know a lot about managing developer networks, and they will either partner with or supplant mobile operators by making powerful AI available to developers right at the edge of the network where it is needed.

One way or another, it seems inevitable that artificial intelligence as an on-demand service will be widely available soon.

Are media and advertising companies ready to use it?

Usually, when we think about the future of media, we don’t consider AI. As I’ve noted in the previous articles in this series, the folks in the media and advertising business are intensely preoccupied with streaming video platforms: on-demand services like Netflix and, considerably less so, live-streaming systems like Twitch.

In their defense, this myopic focus makes sense because streaming video has permanently altered consumer behavior, but at this point it is also kind of an obvious trend: there are more than 200 OTT streaming video services available in the United States today, and thousands outside the US.

What comes next after OTT?

In my speech, I aimed to push past the obvious trends like streaming video to explore something that is still emerging and evolving. To me, right now, that’s AI for media.

Today the question “How will artificial intelligence influence the future of the media and entertainment industry?” is valid because we now have some early answers and sufficient information to speculate about its future trajectory.

But first we need to surmount a barrier of skepticism. Until recently, inside the media industry, there was deep resistance to the notion that AI will play any role whatsoever. Many creative professionals remain convinced of that today.

The prevailing notion is “Robots can’t do creative work.”

For 30 years, I’ve worked in the media and entertainment industry. I live in the heart of Hollywood, amid the greatest concentration of professional creative talent on the planet. Most of the folks I know in the media business would reflexively dismiss the notion that AI can perform any creative function better than a human artist. And nearly all of them would insist that an AI will never steal their job in particular.

But this is the wrong way to frame the question.

Fears about robots and AI “stealing jobs” from humans are overblown. Magazines flog this sensationalistic claptrap relentlessly because scary stories about rogue AIs always sell. Every magazine has succumbed to the temptation of publishing a lurid cover photo featuring a humanoid robot with glowing red eyes and a doomsday headline about the AI apocalypse. This stuff grips the public imagination, but it is poppycock.

While there is no doubt that software automation will displace some workers, there is zero evidence from 300 years of industrial activity to suggest that all jobs will be taken by machines.

The Luddite argument that machines steal jobs is kinda-sorta true in the narrowest view, because some workers are displaced every time a new system of automation is deployed, but it is wildly inaccurate in the macroeconomic perspective.

Here are the facts. Automation increases productivity as it frees human workers to attend to higher-value problems, typically at higher wages. Increased productivity is the only non-inflationary route to higher real wages for workers. Automation tends to expand the economy, which generates entirely new kinds of jobs. (I made this argument in detail in my book Vaporized, and you can read it here, here, here and here.)

A more constructive way to think about AI, robotics and other forms of applied software automation is: how will this technology enhance human labor? How will it give human workers superpowers? How will it free human workers from drudgery and rote tasks so that we can turn our attention to the more interesting and vexing complex creative tasks?

How will AI help creative talent produce ever more amazing entertainment experiences?

Let’s consider entertainment and media from this perspective. What is the most constructive way to envision artificial intelligence applied to media and marketing?

I think there are three primary questions:

  1. Can AI improve workflow in entertainment production and distribution? If the answer is yes, and AI saves time, money and human effort, then it will surely be adopted on the broadest scale.

  2. Are consumer audiences willing to engage with automated systems for entertainment and information? If yes, then this should dispel the lingering notion that humans won’t pay attention to content produced by robots.

  3. Are consumer audiences willing to pay for entertainment and information presented by robots and AI? If yes, then there’s evidence of a business model.

In my speech at the telco, I addressed all three questions with examples and evidence from early deployments that are taking place today, not in the future.

This evidence overwhelmingly supports an affirmative response to all three questions. Moreover, it points to a future of media that is more bizarre and far more exciting than anything I have ever previously designed and launched in my 30 year career.

Here’s what I told the telco executives:

1. Can AI improve workflow in entertainment production and distribution?

Yes. There are examples of artificial intelligence in use today at nearly every stage of pre-production and post-production as well as distribution.

Consider the following examples:

Algorithms and AI in Programming and Presentation

Streaming video pioneer Netflix famously relies upon big data analysis to determine whether or not to greenlight a new film or series. Ever since the success of House of Cards in 2013, Hollywood pundits have speculated breathlessly about exactly how much Netflix relies upon algorithms to make programming and acquisitions decisions. Chief content officer Ted Sarandos downplays the significance of data-driven programming, claiming “It’s 70 percent gut and 30 percent data.” Outsiders suspect he is deliberately underselling in order to put rivals off the scent.

Netflix has been at the forefront of algorithmically-powered recommendations for more than a decade. In the half-decade since House of Cards, the OTT giant has developed an increasingly complex stack of machine learning algorithms, including open source tools, to improve system recommendations.

Netflix uses data analysis to predict audience behavior rather than to estimate the performance of a particular program. In this sense, Netflix is in the content personalization business. The company claims that 75% of viewer activity is driven by algorithmic content recommendations. But even that figure understates just how pervasive algorithmic recommendations are within the Netflix service. Sarandos frequently points out that each viewer’s experience of Netflix is unique. As the company says, there are more than 100 million versions of the service. Even the individual key art that appears on your TV screen to promote a single episode of a show will vary from the art that other subscribers see.
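Netflix has publicly described using bandit-style algorithms to pick which key art each viewer sees. As a toy sketch of the general idea, not Netflix’s actual system, here is an epsilon-greedy selector; all artwork names and click rates below are invented for illustration:

```python
import random

def pick_artwork(candidates, click_rates, epsilon=0.1):
    """Epsilon-greedy choice: usually show the best-performing artwork
    for this viewer segment, but occasionally explore an alternative."""
    if random.random() < epsilon:
        return random.choice(candidates)  # explore a random variant
    # exploit: show the variant with the best observed click-through rate
    return max(candidates, key=lambda art: click_rates.get(art, 0.0))

# Hypothetical click-through rates for three artwork variants of one show.
rates = {"art_dramatic": 0.12, "art_comedic": 0.07, "art_ensemble": 0.09}
choice = pick_artwork(list(rates), rates, epsilon=0.0)
print(choice)  # with epsilon=0.0 this always exploits: art_dramatic
```

In a production system the click rates would be updated continuously per viewer segment, which is what makes each subscriber’s version of the service look different.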

Today, the most successful startup media firms begin with Netflix-style algorithmic programming built into their software architecture from the outset. AI is in their DNA.

Enter TikTok, owned by the Chinese firm ByteDance. You probably don’t use this app, and you may not have even heard of it, but your ten-year-old kids are using TikTok to watch music videos and make their own funny videos.

Part of TikTok’s extraordinarily fast growth can be attributed to machine learning algorithms that ensure each viewer sees a completely unique series of clips. It is a software-defined media service, programmed entirely by AI. It’s a glimpse of the future of media.

Other new firms apply machine learning to make predictions about whether or not a film will be profitable based on the story or script. Startup ventures like Cinelytic, ScriptBook and Vault offer algorithmically-generated insights to major motion picture companies. ScriptBook claims to be twice as accurate as the major studios in predicting a particular film’s eventual box office success.
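The models these startups sell are proprietary, but the basic shape of a script-scoring predictor is easy to sketch. The following toy example, with entirely invented feature names and weights, shows how a handful of script attributes might feed a logistic score trained on historical box-office data:

```python
import math

def profitability_score(features, weights, bias=0.0):
    """Logistic score in [0, 1]: a stand-in for the kind of model a
    script-analysis startup might fit to historical box-office results."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features extracted from a screenplay (names invented):
weights = {"genre_popularity": 1.2, "star_power": 0.8, "budget_fit": 0.5}
script  = {"genre_popularity": 0.9, "star_power": 0.4, "budget_fit": 0.7}

score = profitability_score(script, weights, bias=-1.0)
print(round(score, 2))  # ≈ 0.68 on this made-up example
```

A real system would learn the weights from thousands of past releases; the point is simply that a script becomes a feature vector and the greenlight question becomes a prediction problem.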

Other entertainment giants are racing to catch up with Netflix. 21st Century Fox partnered with Google, using the TensorFlow framework to develop a machine learning system intended to predict audience preferences and film performance before greenlighting a production.

From Prediction to Visualization and Content Generation

Some media companies are moving towards algorithmic generation of content. The Walt Disney Company uses artificial intelligence software to generate rapid prototypes: the Disney AI can interpret natural language descriptions of scenes and settings in movie screenplays in order to generate storyboards and rough animation sequences.

Some ambitious startup firms are venturing further than that. They seek to generate the entire entertainment experience algorithmically. California startup rctstudio is determined to leverage AI to generate open-ended immersive story worlds in which the audience can roam freely and interact with non-player characters (artificial personalities powered by AI). Think of rct’s work as a blend between online video games and movie worlds.

AI in Post-Production

Hollywood tech firms now use AI to improve the realism of human characters in computer-generated imagery and digital special effects, to insert and remove objects from a scene, and to optimize the performance of a particular movie trailer.


LA-based VideoGorillas has developed a suite of tools called “Bigfoot” which employ generative adversarial networks (GANs) to upscale standard-resolution video to 4K quality. Other tools in the Bigfoot suite already automate the drudgery of routine editorial tasks like identifying and removing problematic shots and conforming source footage to existing master versions of shows.

None of this technology is “stealing jobs” from film editors or special effects crews. The AI is a tool to free these pricey specialists from the drudgery of scrolling through miles of footage to find a shot or wasting hours manually tweaking a special effect. The AI saves time, money and effort.

You don’t need to be a professional to use these tools. On Reddit, video game modders are using AI to upscale old video games for display on higher resolution monitors.

AI in Distribution

Machine learning algorithms are excellent at predicting whether a particular person will engage with a video clip or not. Netflix is at the forefront of applied artificial intelligence in every stage of video delivery.

But that’s not all. AI also helps to govern the quality of service to each subscriber. Netflix uses artificial intelligence to monitor bandwidth in the network and optimize a particular household’s video and audio streams based on available bandwidth and network congestion.
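Netflix’s stream-selection logic is proprietary, but the core of any adaptive-bitrate scheme is simple: measure throughput, then pick the highest rung of a bitrate ladder that fits with some safety headroom. A minimal sketch (the ladder values below are illustrative, not Netflix’s actual encoding ladder):

```python
# Illustrative bitrate ladder in kbps, from low-quality to 4K-ish rungs.
LADDER_KBPS = [235, 750, 1750, 3000, 5800]

def pick_bitrate(throughput_kbps, ladder=LADDER_KBPS, headroom=0.8):
    """Return the highest bitrate at or below throughput * headroom,
    falling back to the lowest rung on a badly congested link."""
    budget = throughput_kbps * headroom
    eligible = [rung for rung in ladder if rung <= budget]
    return max(eligible) if eligible else min(ladder)

print(pick_bitrate(4000))  # 3000: the 5800 rung exceeds the 3200 kbps budget
print(pick_bitrate(200))   # 235: congested link, drop to the lowest rung
```

Real players re-run this decision every few seconds per household, which is how quality of service stays smooth as network congestion fluctuates.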

Netflix even uses artificial intelligence to monitor whether subscribers share their passwords.

AI can also aid monetization of video by improving the environment for advertising. Today several firms compete to offer systems powered by AI for brand safety, efficient targeting and more completed views. Now that programmatic delivery of ads is well established, the next frontier for advertisers will be to use AI to increase the relevance by personalizing the message.

Let’s cap this first question with an enthusiastic yes. Whatever can be automated, will be automated. Today it is easy to find many examples of companies in the motion picture and entertainment industry that rely upon artificial intelligence in every stage of planning, programming, pre-production, pre-visualization, post production, distribution and monetization. Some of these applications are more advanced than others. There’s room for improvement in all of them. But each confers an advantage on the companies that use it today. AI in media is here to stay, and it will just get better and better.

2. Are consumer audiences willing to engage with automated systems for entertainment and information?

Yes. Today there are numerous examples of large numbers of consumers interacting with automated systems in ways that would have been considered preposterous science fiction fantasy just a decade ago.

Would you have a conversation with your home stereo? Ten years ago, this would have been a silly question. Today 100 million people talk to Alexa. If we include smartphones and other smart speakers, then the number of people interacting with voice-driven AI assistants soars even higher into the many hundreds of millions. That’s people talking to machines. Think about that for a moment. And if you still consider Apple’s Siri “embarrassingly inadequate”, just remember that this is an incredibly complex feat of engineering. Like all technology, this is the worst it will ever be, as each successive generation continues to improve.

Would you ask your TV to find something to watch? Thanks to embedded Alexa and Google assistants and improved smart TVs, the habit of talking to your television or to your remote control is now commonplace.

This could seem like an absurd new habit, until you compare it to what came earlier. Back in the days of satellite and cable TV, we used to fumble with overcomplicated remote controls, jabbing at tiny buttons with our fat fingers to navigate through endless scrolling menus. Looking back at cable TV’s cumbersome interface from today’s vantage point, that was clearly absurd.

Ten years ago, it was the opposite. The only idea that sounded dumber than a television that you talked to was the infamous “smart refrigerator”, a durable trope from CES that dates all the way back to the 1990s. Back then, we all agreed it was a stupid idea. Now it’s here. There are now Alexa-powered refrigerators, microwaves, cars, mirrors, showers and even toilets, whether we asked for them or not. All of this proves the age-old observation that new consumer technologies are often considered preposterous until the technology advances; then when they work acceptably well, we begin to take them for granted and promptly forget what came before.

Would you chat with a robot? Until recently, this question would have elicited memories of failed 1960s experiments like the original Eliza textbot.

Today chatting with machine intelligence is commonplace. Millions of people interact with chatbots via messaging apps like Kik and WeChat every day.

Many folks prefer this type of dialog to dealing with conventional customer service. Unsurprisingly some customers find it more satisfying to communicate with companies via their chatbot instead of coping with a disempowered human operator who happens to be sitting in a call center in Bangalore. In 2018 more than 2 billion business-related messages were sent via 30,000 chatbots in Facebook Messenger.
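Most commercial chatbots on these platforms are far simpler than the AI hype suggests: many are keyword-matching rule engines in the spirit of Eliza. A minimal sketch, with rules and replies invented for illustration:

```python
# A minimal keyword-matching customer-service bot. Each rule maps a
# trigger keyword to a canned reply; unmatched messages escalate to a human.
RULES = [
    ("refund",   "I can help with refunds. Could you share your order number?"),
    ("shipping", "Orders usually ship within 2 business days."),
    ("hours",    "Support is available 24/7 right here."),
]

FALLBACK = "Let me connect you with a human agent."

def reply(message, rules=RULES, fallback=FALLBACK):
    text = message.lower()
    for keyword, answer in rules:
        if keyword in text:
            return answer
    return fallback

print(reply("What are your hours?"))     # matches the "hours" rule
print(reply("My package exploded"))      # no keyword match -> human handoff
```

Crude as it is, this pattern resolves a surprising share of routine queries, which is exactly why customers often prefer it to waiting on hold for a call center.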

Baby boomers seem to enjoy chatbots even more than Millennials. Inventor and AI cheerleader Ray Kurzweil predicted, “If you think you can have a meaningful conversation with a human, you’ll be able to have a meaningful conversation with an AI in 2029. But you’ll be able to have an interesting conversation before that.”

Would you accept broadcast news that is read by a robot anchor? Maybe the US is not quite ready for this level of innovation, but it’s a fact in Russia and China where robot newsreaders are already on TV. (As Broadcast News revealed in 1987, the newsreader is not necessarily the person who does the investigative journalism or the one who even writes the copy. In many cases the human newsreader does just that: he or she reads the news displayed on a teleprompter while seated in a studio looking pretty. That’s it. In other words, the newsreader is a biological robot ripe for displacement by a machine).

Would you read a newspaper written by robots? You probably already do, at least in part. Most readers of newspapers cannot distinguish between articles written by human reporters and those generated by automated systems like Narrative Science. Today automated news coverage is limited mainly to simple reporting of sports scores, weather and stock market results, but the learning algorithms are improving constantly.

Would you listen to music generated by AI? A German app called Endel has been signed by Warner Music in the first-ever multi-album deal for a non-human recording artist. Endel is under contract to deliver 20 albums of mood music by the end of the year.

For years, fans have speculated whether services like Spotify relied upon machine intelligence algorithms to generate ambient music and generic dance tracks by fake artists. Now Spotify has come clean: AI finally has its own genre and playlists.

It’s not all ambient trance music, either: now an AI-fueled live stream generates endless death metal on YouTube.

Currently, better results come from joint efforts between human composers and AI systems. An entire cottage industry of firms has sprung up, offering AI tools to burnish the efforts of recording artists and generate ideas for fresh tracks. It’s a growing field with plenty of room for improvement.

See? This reinforces the initial point above about robots stealing human jobs. AI is not replacing the recording artist, it’s making the artist better. Just ask Billboard Magazine, the house organ of the music labels.

Would you watch a film written by an algorithm? Honestly, you wouldn’t. Not yet, at least. So far, screenplays written by AIs are most notable as the occasional oddball novelty rather than compelling entertainment. Why is this the case? More on that below.

But wait. Before you reject the notion that artificial intelligence can generate a script, consider this Lexus commercial that was written by an AI.

Would you play a video game against an AI? If so, you may have a death wish. Seriously, AIs have been crushing human opponents in an unbroken succession in every obscure corner of the gaming racket since Deep Blue beat chess grandmaster Garry Kasparov, followed by Watson trouncing two Jeopardy! champions, followed by AlphaGo thumping Lee Sedol in Go. And now there’s an AI that kicks professional DOTA player butt. Do not accept a challenge to play against an AI. You have no chance, puny human. You’ll just be humiliated, and you’ll regret it when SkyNet takes over.

To summarize this section, there’s no question that human audiences enjoy interacting with AIs and automated systems, even though some of these systems are works-in-progress. Today there are numerous examples of humans interacting with artificial intelligence at scale on a daily basis to obtain information, instructions, search results, news, and even original entertainment. It no longer seems odd to shout at your stereo, “Hey Google, play the Chainsmokers on Spotify.” We seem to be at the early stage of developing a rapport, even a relationship of dependency, with non-human intelligence, even though other tasks like rich storytelling remain out of reach of AIs today.

3. Are consumer audiences willing to pay for entertainment and information presented by robots and AI?

The examples cited in the previous section don’t necessarily prove whether or not consumers will find entertainment generated by machines truly valuable enough to warrant a cash payment.

It’s one thing for a machine to provide passable entertainment or news for free; it’s another thing entirely to persuade audiences that machine-generated content is worth paying for.

If we build it, will they pay? The surprising answer, based on all available evidence, is hell, yeah.

The best examples come to us from music tours. Concert tickets are expensive and competition between tours is fierce. It’s a tough market.

Now there’s a growing cottage industry that specializes in bringing dead celebrities back on stage in the form of rudimentary hologram simulations in otherwise live concerts.

Dr Dre and Digital Domain introduced the idea in 2012 at Coachella, when Snoop Dogg performed on stage with Tupac Shakur. The performance was notable because, 15 years earlier, Tupac had been killed in a drive-by shooting in Las Vegas.

Until showtime, no one in the cast or crew was quite certain how the audience would respond to a reincarnated rapper. In the end, Tupac’s return was a smashing success, and Dre had the good sense to limit the engagement to two performances.

Since then, plenty of dead celebrities have been digitally exhumed as performing zombies, including Michael Jackson, Roy Orbison, and Frank Zappa. A planned Amy Winehouse tour was recently postponed, and a Whitney Houston hologram revival was announced two weeks ago.

The dead-celebrity-concert-tour business is pretty weird and legally complicated, as this Vox article explains. But the consistently positive reaction from audiences suggests that there is an appetite for simulated crooners that goes far beyond ghoulish novelty. A long list of dead singers, ranging from Patsy Cline and Marilyn Monroe to Maria Callas, is currently under consideration for, ahem, revival tours.

Just to prove that this gimmick is not solely reserved for dead artists, last month Madonna used augmented reality to present not one but five versions of herself dancing during a live performance at the Billboard Music Awards. Let’s hear it for split personalities, live on stage!

Weirdness aside, the success of such virtual concert performers indicates that audiences are clearly willing to pay for entertainment that is generated by machines.

What about an entirely synthetic performer? Will audiences accept a singer designed from scratch in software instead of one derived from a living or dead celebrity? Again, the answer seems to be yes.

The best example remains Hatsune Miku, the animated singer from Japan whose concerts consistently sell out when she tours. Miku, whose Japanese name means “first sound of the future”, began her existence as a Vocaloid, a voicebank powered by vocal synthesizer software from Sapporo-based Crypton Future Media and Yamaha.

Watch this video: https://www.youtube.com/watch?v=Z5vxRC8dMvs


©2018 by B-AIM