Summer 2024 Update
Since we published Stack Overflow’s Industry Guide to AI in January 2024, the rapidly evolving AI market has seen many changes. In this Summer 2024 update, we highlight several key areas where the cost, complexity, and capabilities of GenAI systems have shifted in the past few months, and where making the right decision for your organization now requires taking stock of these new developments.
Costs are coming down. In January 2023, the price to access powerful GenAI models from vendors like OpenAI, Google, and Anthropic was $2–5 per million tokens. Since then, costs have fallen by 75% or more. For example, at launch, OpenAI’s most capable model cost $2 per million tokens; a more capable version of that same model now costs $0.50 per million tokens (per OpenAI’s GPT-4o announcement in May). With the release of each new model, costs seem to fall by 50% or more. If the trends of the last 18 months continue, these prices will keep pushing toward zero, making it far more appealing to rely on a third-party AI model than to build your own in-house.
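To make the pricing shift concrete, here is a minimal sketch of how per-million-token pricing translates into a monthly bill. The prices and usage figures are hypothetical stand-ins, not actual vendor rates, which vary by model and change frequently:

```python
# Illustrative cost estimate for hosted LLM usage. The prices below are
# hypothetical examples; real vendor pricing differs by model and often
# separates input and output tokens.

def llm_cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost in USD to process `tokens` at a given price per million tokens."""
    return tokens / 1_000_000 * price_per_million

# Example: 10M tokens per month at two hypothetical price points.
monthly_tokens = 10_000_000
print(llm_cost_usd(monthly_tokens, 2.00))   # $20.0 at $2.00/M tokens
print(llm_cost_usd(monthly_tokens, 0.50))   # $5.0 at $0.50/M tokens
```

Even at this modest volume, a 75% price drop turns a $20 line item into $5, which is why falling token prices shift the build-vs-buy calculus toward hosted models.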
Multi-modal capability. When we first published this guide, we focused solely on the capabilities of text-based LLMs. While multi-modal models existed at the time, they were complex and often not well-integrated into a package suited for enterprise software. Over the past six months, however, most major GenAI providers have become multi-modal by default. This means you can ask questions not just about documents and code inside your company, but also about graphic presentations, video files, and audio notes. This opens up an enormous new ability to centralize and tap into the information your organization has built up over time, much of which may be difficult to locate with just a file name and date. This also underscores the need for a knowledge management platform like Stack Overflow for Teams to build and maintain your company’s knowledge.
Reduced complexity with more security offerings. One of the most difficult parts of committing to build a GenAI application inside a large organization is ensuring that the LLM you’re working with will respect the rules you have established for privacy, security, and governance of your proprietary data. Recognizing this challenge, many of the large GenAI providers now offer clients the ability to keep data in a private and secure cloud, limiting the risk that something will be unintentionally exposed. Higher-level prompt engineering, chain-of-thought reasoning, and multi-agent workflows are also being used to verify output before it’s shared with end users, helping to flag and edit potentially toxic or confidential material.
Tokens, tokens, everywhere. One of the limiting factors of GenAI assistants when we first published this guide was that they could only work with a certain amount of context. Provide too much material—say, several years’ worth of legal briefings—and the system simply couldn’t hold and reason over all that information while taking in queries and generating output. Advances in the architecture of GenAI systems, however, have expanded the context window, with providers like Google now promising a million, perhaps even ten million, tokens for Gemini. With this change, end users can potentially avoid the work of fine-tuning a model or building one on top of open source. You can now simply take the most powerful foundation models currently on the market and provide them with a massive amount of context about your company or industry, allowing them to become far more informed about your internal knowledge or particular field of interest.
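A quick way to gauge whether a document corpus fits into a large context window is a back-of-the-envelope token estimate. The sketch below uses the rough heuristic of about four characters per token; real counts depend on the specific model’s tokenizer, so treat this as an approximation only:

```python
# Rough sketch: estimate whether a set of documents fits in a model's
# context window, using the common ~4 characters-per-token heuristic.
# Actual token counts depend on the model's tokenizer; this is only a
# back-of-the-envelope check before sending context to an API.

def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(docs: list[str], context_window: int) -> bool:
    """True if the estimated total token count fits the window."""
    return sum(approx_tokens(d) for d in docs) <= context_window

# Hypothetical stand-in documents and a 1M-token context window.
docs = ["lorem ipsum " * 2000, "quarterly report text " * 5000]
print(fits_in_context(docs, 1_000_000))
```

For production use you would swap the heuristic for the model’s actual tokenizer, but even this crude check shows why a million-token window changes the calculus: whole archives that once required fine-tuning or retrieval pipelines can now simply be passed in as context.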
Quality data is king. As Mark Zuckerberg explained while discussing the creation of Meta’s latest Llama 3 model, data quality is key to GenAI performance. Meta chose to overtrain on technical data like code because doing so seems to add extra logic and reasoning capabilities to LLMs. We at Stack Overflow feel the same way, which is why we created OverflowAPI and why we are bringing Stack Overflow’s wealth of technical knowledge—over 15 years of the highest-quality Q&A material about programming—to users of Stack Overflow for Teams.
In true AI fashion, we asked our own AI chatbot StackPlusOne for a quick synopsis of the above points:
The landscape of the AI market has undergone remarkable transformations since January 2024, marked by a significant reduction in costs and the evolution of GenAI models to embrace multi-modal capabilities. These advancements not only make third-party AI solutions more accessible but also enhance the ability of organizations to leverage diverse data types for comprehensive analysis. Furthermore, the emphasis on quality data and improved GenAI architecture highlights the critical role of high-quality inputs in achieving superior AI performance, ensuring that organizations can navigate the complexities of privacy and data security more effectively.