Navigating the AI Landscape: A Software Developer's Take

Are you confused about the current state of AI and where things are going, curious about it, or just keen to hear what others think? Then this article is a good read for you!

I've been using GitHub Copilot at work for so long that I don't remember when I started. After the chat feature was introduced, I ditched ChatGPT as well, since Copilot can now handle what I used ChatGPT for: generating tests, mock data generators, and so on.

Even though I'm pretty satisfied with the tools I have, and things are changing a bit too rapidly for those watching from the sidelines, I felt it was time to get a better understanding of the current state of the AI/ML world. The opportunities are getting more and more real, and the applications are too interesting to ignore.

This article is a summary of my observations and thoughts after diving in and actively following the field for a while.

DISCLAIMER: These are my personal observations as an experienced developer. I'm not working in the AI field directly. I care more about the practical applications and tools than the theoretical aspects of AI. I did validate this article with Brightly's AI lead to ensure I'm not saying anything too stupid.

Also, not being from this field directly might actually help you as a reader, since I skip most of the jargon.

DISCLAIMER 2: this article was completed on March 3rd, 2025. Who knows how long it will be relevant 😄

The State of LLMs

Large Language Models (LLMs) have been dominating the conversation, while few people outside of tech know much about the other kinds of models that already have far more direct practical applications.

Since GPT-4, we haven't seen any truly groundbreaking leaps in response quality for everyday users, nor have these incremental improvements unlocked significant new use cases. Recent meaningful improvements have been more about:
- Larger context windows, so you can provide more material with your prompts
- Improved reasoning capabilities
- Faster responses
- Significant gains in smaller, less resource-hungry models

It feels like current LLM solutions are bumping up against their limits, and we need some fundamental architectural shifts before another surge of major progress that regular people can actually see and feel.

The initial wave of hype and job-stealing anxieties has also calmed down a bit.

One interesting development, though, is DeepSeek. It has shown that smaller players, with the right resources and talent, can build their own custom LLMs and break free from the constraints of existing providers. The fact that it's open source is also huge, because anyone can use it as the basis for their own spins and custom models. DeepSeek brought reasoning capabilities to smaller models, along with big boosts in answer quality. Improvements in smaller models will help decentralize the field.

Right now, the focus seems to be shifting to optimization and, crucially, the development of better tools around the models.

The AI Toolkit

We're going to see an explosion of specialized AI tools designed to make the workforce more efficient. Cursor is a great example of this trend.

Agents: The Potential

Speaking of tools, AI agents are the new talk of the town.

Imagine this: Instead of hiring a bunch of engineers to build and train a whole new model for handling e-commerce refunds, you could give an agent access to your existing refund data and to the endpoints of your refund processing system.

Then, with some clever prompting, you could train it to handle returns autonomously. Something like:
You are a customer support agent specializing in handling refunds. Your job is to study the existing refund data found in X and tell me the key points for detecting a fraudulent refund. Once I give you permission to handle refunds, you will have access to endpoint Y for processing refunds and endpoint Z for flagging suspicious refunds for manual review. If you are unsure about any refund, send me a message directly and try to learn from the answers.

After this, you can prompt some test cases and see how it performs. With a bit of luck, you have automated a big part of your refund handling process with just one prompting-savvy person who has some time on their hands.
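To make this a bit more concrete, here's a minimal sketch of how such an agent could be wired up with tool calling, assuming an OpenAI-style chat API. The tool names (processRefund, flagForReview), the model name, and the prompts are made up for illustration; a real integration would call your own refund system behind those tools.

```typescript
// Minimal sketch of a refund-handling agent using OpenAI-style tool calling.
// The tool names, model, and prompts below are illustrative assumptions,
// not a production design.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Describe the two endpoints (Y and Z from the prompt above) as callable tools.
const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "processRefund",
      description: "Process an approved refund (endpoint Y)",
      parameters: {
        type: "object",
        properties: {
          orderId: { type: "string" },
          amount: { type: "number" },
        },
        required: ["orderId", "amount"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "flagForReview",
      description: "Flag a suspicious refund for manual review (endpoint Z)",
      parameters: {
        type: "object",
        properties: {
          orderId: { type: "string" },
          reason: { type: "string" },
        },
        required: ["orderId", "reason"],
      },
    },
  },
];

async function handleRefundRequest(requestText: string): Promise<void> {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are a customer support agent specializing in refunds. " +
          "Approve clear cases with processRefund and flag anything suspicious with flagForReview.",
      },
      { role: "user", content: requestText },
    ],
    tools,
  });

  // In a real agent you would execute the chosen tool against your refund
  // system and feed the result back to the model in a loop until it is done.
  for (const call of response.choices[0].message.tool_calls ?? []) {
    if (call.type === "function") {
      console.log(call.function.name, call.function.arguments);
    }
  }
}

handleRefundRequest(
  "Order #1234 arrived broken, and the customer is asking for a full refund."
).catch(console.error);
```

The interesting part is that the heavy lifting sits in the prompt and the tool descriptions rather than in model training, which is exactly why one prompting-savvy person can get surprisingly far.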

You might also want a data scientist to go through the training data if you want the agent to have any chance of being better than the current process.

That's the potential, at least! I haven't properly tested agents myself yet, but I've heard they can be quite inconsistent unless given a very small context and very specific instructions.

If we're able to work with bigger contexts later on, and considering that even smaller players could build their own custom models around this, the possibilities are crazy.

Will these agents steal our jobs? I doubt it, as long as companies focus on growth and improvement rather than just cutting costs, and employees are willing to be versatile. But they *will* likely reshape certain roles, especially in customer service, process management, and information flow.

ML Development: The Value

Meanwhile, traditional ML model development continues steadily, offering reliable solutions for specific tasks. As long as you have the data to train them and the resources to hire the talent, these models will remain the better choice for the tasks they are designed for, at least for the foreseeable future.

A general-purpose computer can do all kinds of things, but a computer with specs similar to a PlayStation will never beat the PlayStation at running games. In this analogy, the LLM is the computer and the specialized models are the PlayStation.

Generative AI: The Wild Card

Generative AI is advancing at a breathtaking pace. This field is exploding, but its future is still uncertain. A lot depends on how regulations and societal acceptance evolve because of copyright issues and the potential for misuse.

Players like Adobe and other tech companies sitting on a lot of copyrighted material are strong here: they don't have to worry about regulation as much, since they can train their models on their own material.

In most cases, AI-generated content will probably end up as non-copyrightable material, which can limit its use cases. Companies training on their own material, though, can most likely claim the copyright themselves or let their customers claim it.

There is a lot of potential all around for generative AI, given what you can do with the ability to generate images, videos, and music on the fly.

Real-world cases where the tech is actually used, rather than just experimented with, are talked about less, so here are some non-LLM examples I know of:

Image generation

- Heavily used in social media and blog posts for generating relevant images for the content.
- Removing a person from an image or seeing what you would look like with pink hair has never been easier.

Music

- Producers have used Suno for inspiration, and I've bumped into videos on YouTube where creators have used AI-generated background music.

Video

- A lot of old movies and historical footage has been colorized and upscaled using gen AI tech.

Video games

- AI-generated assets have been used heavily in game prototyping, and some games have used them in production as well, though mainly for minor background assets.

Simulation

- AI generated synthetic data has been used in simulations in medical and biological research.

Here are more examples from Google's customers if you are interested.

Processing Power: Nvidia's Reign

Let's talk hardware a bit.

We're not seeing huge performance gains from individual processing units anymore. The current strategy is all about scaling.

Nvidia still currently reigns supreme in the processing power arena, but how long will that last?

Nvidia's success is a combination of early entry and their CUDA platform. They don't have any secret sauce that others can't eventually replicate.

I suspect this landscape will shift.

The broader trends in AI development are likely to lower the barrier to entry, giving other hardware manufacturers a chance to compete, especially if they can offer good price-performance ratios. DeepSeek not using CUDA directly might also give customers more confidence to consider other platforms.

Looking Ahead: Predictions

Democratization of AI: We'll see more accessible and affordable AI tools and platforms from all kinds of players, including smaller businesses and even individuals.

Specialized AI: The focus will shift from general-purpose LLMs to tweaking them into more specialized AI solutions tailored to specific industries and tasks.

Ethical AI: Discussions around ethical considerations, bias, and responsible AI development will intensify.

Conclusion

The AI landscape is evolving rapidly, and by now it's very clear that AI solutions and tools are here to stay. Countries that don't invest in AI will likely fall behind in the global economy in the long term, so given the capitalistic nature of the modern world, investments will continue to grow.

Honestly, I preferred the time when I could just focus on building software and not worry about the AI stuff, but I can see the good in it as well, as long as the benefits don't just widen the wealth gap around the world.

About the author
Lauri Meltti

Lauri is a passionate full-stack developer now offering his skills to large companies as a consultant. With strong expertise in JavaScript, he loves creating high-quality web solutions. He prioritizes understanding the business behind the code and delivering real value to users. As a versatile, hands-on problem solver, he is resilient and constantly seeking to improve and learn. His early career in startups and small companies honed his ability to bring creative solutions to the table efficiently. He enjoys collaborating with other passionate professionals to build interactive, high-quality products.
