NeuroMovies and Infinite Experiences
3.5 years ago, I wrote this piece attempting to predict the future of content creation. I’m on the right path.
I wrote this piece on Jan 7, 2021, a couple of days after Dall-E was announced and a day after Trump’s supporters stormed the Capitol.
I cobbled this piece together over a 3-hour session and shot it across to a few folks for feedback. The hope was to polish and publish it, but I never got around to it. Until now, it lingered forgotten in the digital graveyard of my Notion.
It’s been 3.5 years since I wrote the piece, and there’s been far more advancement than I thought there would be. The essay, however, remains untouched save for grammar and typo fixes. I do plan on writing a few updates down the line.
About Buried Notes
I’ve been writing and documenting my thoughts, ideas, and observations (on myriad topics) over the last decade. But I’ve never published them. Buried Notes will be where I share these raw notes, albeit with a little polishing.
We're starting to get a glimpse into the future of content creation. On Jan 5, 2021, OpenAI introduced Dall-E. In their own words, it's 'a neural network that creates images from text captions for a wide range of concepts expressible in natural language'. After GPT-3, it was inevitable, but it has arrived a lot sooner than expected (even if it's yet to take the world by surprise). For those not familiar with GPT-3, it is a pre-trained autoregressive (it predicts future values based on past values) language model that uses deep learning to produce human-like text. I lifted that off Wikipedia because the explanation is as simple as it gets.
Back to Dall-E. What this means is that the model can create an image of 'an orange-haired person who goes on to become the president, loses his mind, and incites radical white supremacists to insurrection' just by being given that text as input. The output is, of course, Donald Trump last night.
To be honest, I don't know if it would be the actual output of Dall-E, but one can be hopeful.
Apologies for leading you on. Below is an example of what it can do given a text prompt. The image is sourced from the OpenAI blog.
There is a content creation explosion that is about to begin. In fact, it's already happening on social platforms and even in gaming - there are 300k+ game developers on Roblox. But the bang's gonna be bigger.
The future of content creation will be AI-assisted. From conceptualization to production, AI-driven tools will enable creation at low costs. What costs a million dollars today will soon cost a tenth, maybe even a hundredth, of that.
A whole host of companies will rule the landscape:
- Creation Tools
- Distribution Tools
- Social platforms
- Tools that allow creators to own their audience
- Networks: OTTs
- Radical Technology: Neuralink
Note to self: Expand on this. Add more companies, and turn it into a graphic.
With this as context, I'd like to take a stab at what the future of content looks like through a short story. Strap in, this is going to be a wild ride through my imagination.
Lisa powered down her work system. It had been a fairly intense day of training and fine-tuning algorithms at the RoboDojo. What started out as complex work was beginning to turn into routine. AI helpers were now around to assist her every step of the way. Her job was at stake, they said. As much as she'd have wanted to, she couldn't really upskill any further. A Ph.D. didn't go as far as it did a decade ago.
Before she could sink into another bout of self-induced anxiety, she was yanked back to the present. It helped that she had the skills to root her Neuralink and write software for it. She didn't trust the chemicals any longer. Not that Neuralink was without its risks. But her trust in Morpheus was unshakeable; after all, she was a contributor to the most sophisticated software that protected one's Neuralink from external threats.
Of course, she could've done without inserting the contraption in her brain, but having Neuralink put her on equal footing. She had lost much of her muscular strength to the Cripple-33 virus, and the chip eliminated the need for hands to do the heavy lifting. After all, her beloved Uncle Alex had said, 'The mind is the most powerful thing in the world. Even a thought could lift it.' You may get the reference to what 'it' is, but back when her Uncle Alex was alive, she was too young to understand what it meant. She chuckled at the wisdom.
As she sank into her chair, letting the end-of-day relief wash over her, she decided it was time for a movie. She nestled into the headrest, closed her eyes, and fired up the latest movie by the renegade creator, Kawabata. Though he was still in hiding, he continued to release movies like it was nobody's business. A Call to Arms was about Joanna, a rebel leader of the underground movement in a dystopian world. Not too different from the world now, Lisa thought to herself.
Back in the day, she wasn't one for the movies. They were pictures on a screen, and one was expected to experience one's own interpretation of what the actor was going through in a scene. Neuralink changed that. Someone had figured that it made sense for actors to uplink their emotions into the movie. To call them actors was a misnomer, but the term had persisted. Actors these days were no longer real people. Every character was computer-generated, yet it was hard to tell the difference. Nonetheless, studios employed people to bring human emotion into the movies. It wasn't that computers couldn't do it; it was just that people clung to one last vestige of humanity: our ability to emote.
The filmmaking process, like everything else before it, had transformed into something else altogether. Technology had simplified everything, leading to a creator explosion. GPT-69 and ScriptBook made scripting easy - not that one needed to write a 'script' in the old sense of the word. All they did was latch onto one's Imagineering, fill in the blanks, fact-check if necessary, and ensure sanity in terms of character hygiene and a host of other services. Those into mass-produced content went as far as to use INSPO to get a sense of what people might like. Dall-E v28 - yet another OpenAI creation - along with Unity, Unreal Engine, Spline, and other tooling translated the Imagineering into standard video formats as well as the NeuroMovie format. And, for the benefit of those who did use Neuralink, actors emoted every character in the film. A movie was an experience, finally.
But that wasn't all. Unlike movies of yesteryear with linear flows and single endings, NeuroMovies were powered by rct AI's Chaos Box. Movie creators could add breakpoints in the narrative. At each of these breakpoints, consumers could take control of any character of their choice and use their own emotional output to craft the narrative. The other characters in the movie would adapt to the new narrative to keep it in sync with the storyline, all thanks to Chaos Box. Now, films had several outcomes and infinite experiences.
Lisa enjoyed the movies these days. As the opening scene of Kawabata's dystopia took shape, she forgot everything else that constituted the day and took charge of Joanna's destiny. These days, she couldn't tell the difference between characters in a movie and her fellow people. Not that it mattered. Being human was overrated anyway.
The future belongs to the creators. I don't think we're too far away from this vision. My guess would be 10-15 years for Lisa's story to sound commonplace. But parts of it are already coming alive, and the next 5 years will be unlike what we thought (or didn't really think) they would be.
Hopefully, Imagineering will become a thing. I for one am looking forward to it. I do have a wild imagination and would love to see it come to life because honestly, I'm not too good at creating.
And now, I must take your leave; there's an appointment to keep. Meeting a friend at the metaverse at the end of the universe.
This piece was inspired by obsessive thinking after consuming content from the a16z blog, Matthew Ball’s blog, and a host of other great writers. It would be worth your while to take a look at Meet Me in the Metaverse and listen to The Social Serendipity of Cloud Gaming. But the real prompt was the announcement of Dall-E by OpenAI. I cobbled this together over a 3-hour session this morning.
This piece was written while listening to The Sight Below and other artists - Stars of the Lid, Taylor Deupree, Loscil, and Jan Jelinek. If you like ambient, drone, or non-dance electronic music, their work is dope. If you haven't tried these genres, hit play on the right side.
For now, this is an empty space, but I’ll add notes here that serve as updates. In the 3.5 years since I originally wrote this piece, there have been significant developments: new AI models and companies, signs of how content creation is changing, increased debate around AI’s benefits and dangers, and more.
This piece is here for posterity.