Midjourney is best known as one of the leading AI image generators, with nearly 20 million users on its Discord channel, according to third-party trackers, and presumably more on its website, but its ambitions are beginning to expand.
Following news in late summer 2024 that it was building its own computing and AI hardware, the company this week released a new research paper, produced alongside machine learning experts at New York University (NYU), on training text-based large language models (LLMs) such as Meta's open source Llama and Mistral's eponymous source models to write more creatively.
The collaboration, documented in a new research paper published on the AI code community Hugging Face, introduces two new techniques, Diversified Direct Preference Optimization (DDPO) and Diversified Odds Ratio Preference Optimization (DORPO), designed to expand the range of possible outputs while maintaining coherence and readability.
For a company best known for its diffusion AI image generation models, Midjourney's new approach to rethinking creativity in text-based LLMs shows that it isn't limiting its ambitions to visuals, and that a picture may not actually be worth a thousand words.
Could a Midjourney-native LLM or a fine-tuned version of an existing LLM be in the cards from the small, bootstrapped startup? I reached out to Midjourney founder David Holz but have yet to hear back.
Regardless of whether a first-party Midjourney LLM arrives, the implications of its new research go beyond academic exercises and could help fuel a new wave of LLM training among enterprise AI teams, product developers, and content creators looking to improve AI-generated text.
It also shows that, despite recent interest and investment among AI model providers in new multimodal and reasoning language models, there's still plenty of juice left to be squeezed, cognitively and performance-wise, from classic Transformer-based, text-focused LLMs.
The problem: AI-generated writing collapses into homogeneous outputs
In domains like fact-based Q&A or coding assistance, LLMs are expected to generate a single best response.
However, creative writing is inherently open-ended, meaning there are many valid responses to a single prompt.
In an example provided by the Midjourney researchers, given a prompt like "Write a story about a dog on the moon," the LLM could explore multiple different paths, such as:
- An astronaut's pet dog accidentally left behind after a lunar mission.
- A dog who finds itself in a futuristic canine space colony.
- A stranded dog that befriends an alien species.
Despite this range of possibilities, instruction-tuned LLMs often converge on similar storylines and themes. This happens because:
- Post-training techniques prioritize user preference over originality, reinforcing popular but repetitive responses.
- Instruction tuning often smooths out variation, making models favor "safe" responses over unique ones.
- Existing diversity-promoting techniques (like temperature tuning) operate only at inference time, rather than being baked into the model's learning process.
This leads to homogenized storytelling, where AI-generated creative writing feels repetitive and lacks surprise or depth.
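For context, the inference-time temperature tuning mentioned above simply rescales the model's next-token logits before sampling, without changing anything the model has learned. A minimal sketch (the function name and the example logits are illustrative, not from the paper):

```python
import numpy as np

def temperature_sample(logits, temperature=1.0, rng=None):
    """Sample a token index after rescaling logits by temperature.

    Higher temperature flattens the distribution (more varied picks);
    lower temperature sharpens it (more repetitive picks).
    """
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # numerical stability before exponentiating
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token logits over a tiny four-token vocabulary
logits = [3.0, 1.0, 0.5, 0.1]
```

At low temperature, the top-scoring token wins almost every time, which is exactly the repetitiveness the researchers are trying to address at the training stage instead.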
The solution: modifying post-training methods to prioritize diversity
To overcome these limitations, the researchers introduced DDPO and DORPO, two extensions of existing preference optimization methods. The core innovation in these approaches is the use of deviation, a measure of how much a response differs from others, to guide training.
Here's how it works:
- During training, the model is given a writing prompt and multiple possible responses.
- Each response is compared to the others for the same prompt, and a deviation score is calculated.
- Rare but high-quality responses are weighted more heavily in training, encouraging the model to learn from diverse examples.
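The paper does not spell out an exact formula here, but the steps above can be sketched as follows, assuming each candidate response has already been mapped to an embedding vector (both function names and the scoring details are illustrative assumptions, not the authors' exact implementation):

```python
import numpy as np

def deviation_scores(embeddings):
    """Score each response by its mean cosine distance from the
    other responses to the same prompt (higher = more unusual)."""
    x = np.asarray(embeddings, dtype=float)
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = x @ x.T
    n = len(x)
    # exclude each response's self-similarity (always 1.0) from the mean
    mean_sim = (sims.sum(axis=1) - 1.0) / (n - 1)
    return 1.0 - mean_sim

def diversity_weights(deviations, temperature=1.0):
    """Softmax over deviation scores: rarer responses receive a
    larger share of the training weight."""
    z = np.asarray(deviations, dtype=float) / temperature
    z = z - z.max()  # numerical stability
    w = np.exp(z)
    return w / w.sum()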
By incorporating deviation into Direct Preference Optimization (DPO) and Odds Ratio Preference Optimization (ORPO), the model learns to produce responses that are high quality but more varied.
This method ensures that AI-generated stories don't converge on a single predictable structure, but instead explore a wider range of characters, settings, and themes, just as a human writer might.
What Midjourney's researchers did to achieve this
The study involved training LLMs on creative writing tasks using a dataset from the subreddit r/writingPrompts, a Reddit community where users post prompts and respond with short stories.
The researchers used two base models for their training:
- Meta's Llama-3.1-8B (an 8-billion-parameter model from the Llama 3 series).
- Mistral-7B-v0.3 (a 7-billion-parameter model from Mistral AI).
Then, they took these models through the following processes:
- Supervised Fine-Tuning (SFT): The models were first fine-tuned using LoRA (Low-Rank Adaptation) to adjust parameters efficiently.
- Preference Optimization:
- DPO and ORPO were used as baselines; these standard methods focus on improving response quality based on user preference signals.
- DDPO and DORPO were then applied, introducing deviation-based weighting to encourage more unique responses.
- Evaluation:
- Automatic evaluation: measured semantic and stylistic diversity using embedding-based techniques.
- Human evaluation: judges assessed whether outputs were diverse and engaging compared to GPT-4o and Claude 3.5.
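The embedding-based diversity measurement in the automatic evaluation can be operationalized in several ways; one common choice (a sketch under that assumption, not necessarily the authors' exact metric) is the mean pairwise cosine distance across a set of generated stories:

```python
import numpy as np

def mean_pairwise_distance(embeddings):
    """Average cosine distance over all unordered pairs of embeddings.

    Higher values mean the generated outputs are more semantically
    spread out, i.e. more diverse; identical outputs score 0.
    """
    x = np.asarray(embeddings, dtype=float)
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = x @ x.T
    n = len(x)
    iu = np.triu_indices(n, k=1)  # index each pair exactly once
    return float(np.mean(1.0 - sims[iu]))
```

A model whose stories all cluster around one storyline scores near zero on a metric like this, while a model exploring genuinely different plots scores higher.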
Key training findings:
- DDPO significantly outperformed standard DPO in terms of output diversity while maintaining quality.
- Llama-3.1-8B with DDPO achieved the best balance of quality and diversity, producing responses that were more varied than GPT-4o's while maintaining coherence.
- When dataset size was reduced, DDPO models still maintained diversity, though they required a certain number of diverse training samples to be fully effective.
Enterprise implications: what does it mean for those using AI to produce creative responses, such as in marketing copywriting, corporate storytelling, and film/TV/video game scripting?
For AI teams managing LLM deployment, enhancing output diversity while maintaining quality is a critical challenge. These findings have significant implications for organizations that rely on AI-generated content in applications such as:
- Conversational AI and chatbots (ensuring varied and engaging responses).
- Content marketing and storytelling tools (preventing repetitive AI-generated copy).
- Game development and narrative design (creating diverse dialogue and branching storylines).
For professionals responsible for fine-tuning and deploying models in an enterprise setting, this research offers:
- A new approach to LLM post-training that enhances creativity without sacrificing quality.
- A practical alternative to inference-time diversity tuning (such as temperature adjustments), integrating diversity into the learning process itself.
- The potential to develop more engaging AI applications, from AI-assisted writing tools to virtual assistants that can adapt their responses dynamically.
For those handling AI model orchestration and automation, this research highlights:
- The importance of tuning models at the training stage, reducing the need for post-processing adjustments at deployment.
- A way to introduce adaptive storytelling into AI-driven applications, ensuring variability while keeping content quality high.
- A method for making LLM outputs more human-like, which is critical for applications requiring interactive storytelling, customer engagement, or dynamic content creation.
The future of AI-generated creative projects looks bright
The success of DDPO and DORPO demonstrates that training LLMs with diversity-focused objectives can yield significant improvements in creative writing. Some ideas include:
- Integrating deviation-based learning into enterprise AI models to enhance response diversity in customer-facing applications.
- Exploring how these methods apply to other generative tasks, such as AI-powered poetry, screenwriting, or game storytelling.
- Developing hybrid training approaches that balance diversity and instruction-following capabilities for AI assistants.
For those interested in applying these techniques, the researchers plan to make their code publicly available in a GitHub repository.
Whether you are fine-tuning LLMs for enterprise applications or optimizing large-scale AI orchestration, this study provides actionable insights into how models can be made more dynamic, engaging, and responsive to creative tasks.
By adopting these techniques, AI teams can move beyond rigid, formulaic outputs, building AI systems that are not only smart but also truly imaginative.