AI, Agents, and Hybrid Platforms: How Technologies Are Rebooting the Cloud


How cloud technologies are changing under the influence of AI, how AI assistants differ from agents, and what security challenges arise when neural networks are introduced

How Clouds Are Changing Under the Influence of AI

— How is the approach to cloud management changing with the development of AI?

— The approach is changing dramatically: not only how we manage cloud infrastructure, but how people interact with digital systems, interfaces, and applications in general.

The classic search engine is fading into the background, replaced by neural-network tools such as GigaChat and ChatGPT, where the user expects an immediately relevant, human, easy-to-understand answer: not just a link, but a specific, personalized result.

We saw this trend and built our own solution based on generative AI: the AI assistant Claudia. It is not just a chatbot, but a system that helps automate a wide range of tasks related to cloud infrastructure.

For example, a developer can already ask Claudia to deploy a virtual machine, set up a connection, deploy an infrastructure monitoring dashboard, and configure alerting. There are plans to add the ability to create a Kubernetes cluster or a database through a dialogue with Claudia. The solution also helps people who are not in IT at all: a marketer or a small-business owner can quickly build a website or launch a landing page. All that is needed is to describe the task in text, as in an ordinary messenger, and the AI assistant will perform the necessary actions itself rather than merely voicing recommendations.

— In recent years, there has been increasing talk about AI assistants and AI agents. What is the fundamental difference between these entities and how does it manifest itself in cloud platforms?

— Here’s a simple analogy: a chatbot is a vending machine. It can only perform the actions pre-programmed into it and follows only scripted scenarios. If you want to buy a drink, the machine will give you exactly what was loaded into it.

An AI assistant is more like an older brother. It will not do everything for you, but it will suggest how best to act, which steps to take, and what to try. Such an assistant can work with large amounts of information and help with decision-making, but the action is up to you.

An AI agent, by contrast, is an autonomous system. It both plans and executes a task, and can do so without human intervention. The agent initiates actions, accesses the necessary sources, and launches processes: for example, it can not only tell you how to book a hotel, but book it itself. It can process a request, enter data into the CRM, select a tour, and make a reservation, all in the background. In the cloud, an agent will soon be able to deploy basic infrastructure on its own, for example, launch a virtual machine or set up monitoring. Here the AI assistant becomes a window into a complex digital system: the user sees a chat interface, while under the hood a full-fledged multi-agent architecture is at work.
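The assistant/agent distinction above can be sketched in a few lines of Python. The tool names here (create_vm, setup_monitoring) are invented for illustration and do not correspond to any real cloud API: an assistant only recommends steps, while an agent executes them through registered tools.

```python
def assistant(request: str) -> str:
    """An assistant only suggests steps; the human performs them."""
    return f"To handle '{request}', you could: 1) create a VM, 2) set up monitoring."

class Agent:
    """An agent both plans and executes steps via registered tools."""
    def __init__(self):
        self.tools = {}
        self.log = []

    def register(self, name, fn):
        self.tools[name] = fn

    def run(self, plan):
        # Execute each planned step autonomously and record the result.
        for step, args in plan:
            result = self.tools[step](**args)
            self.log.append((step, result))
        return self.log

agent = Agent()
agent.register("create_vm", lambda cpu, ram: f"vm-{cpu}cpu-{ram}gb ready")
agent.register("setup_monitoring", lambda target: f"dashboard for {target} live")

log = agent.run([
    ("create_vm", {"cpu": 2, "ram": 8}),
    ("setup_monitoring", {"target": "vm-2cpu-8gb"}),
])
```

The point of the sketch is the control flow, not the tools: the human writes (or approves) a plan once, and the agent carries every step through to completion on its own.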

— What tasks are AI assistants and agents already helping to solve in real business scenarios today?

— Technologies already cover a whole range of areas — from user support to optimization of complex industrial processes and supply chains. I would name four key scenarios.

The first is user support, where introducing agents and assistants has a tangible effect. For example, every tenth question to Cloud.ru customer support is already processed by AI, and the plan is to raise that share to 70% without reducing quality.

The second scenario is marketing and sales. AI helps marketers cope with tasks related to creativity, the “blank page” problem, and content creation. In our company, the AI assistant Claudia promptly generates preliminary calculations for commercial offers directly in chats with customers.

The third scenario is development. We actively use AI assistants in development teams. In addition, there are solutions like GigaCode that help others implement generative AI in development. This is also a strong trend: tools like GitHub Copilot are no longer exotic, but part of everyday work.

If we look at it more broadly, then multimodal AI is coming to the forefront globally. An example is a recent Google presentation: a user asks how to fix a bicycle, and the assistant builds a sequence of actions using an image and text. This is an agent system and a multimodal neural network in action. We are also moving in this direction in Russia. More and more companies are testing such scenarios.

— To what extent does the implementation of AI agents in cloud management today correspond to global practices?

— At Cloud.ru, we have developed our own cloud environment for building agents, Cloud.ru Evolution AI Factory. Colleagues at other Russian companies are working in the same direction, creating LLM platforms and AI services.

Most tools are still built on open-source technologies, which offers certain advantages for the wider adoption of AI. First, Russian developers take an active part in international open-source projects and make a serious contribution to the global AI ecosystem. Second, even relying on open solutions, you can create unique products. So I believe our market has made significant progress in developing AI solutions over the past year and a half.

Implementation of AI in the company: barriers and solutions

— Despite widespread implementation, not all companies have switched to working with AI in practice. What prevents companies from effectively implementing artificial intelligence in work processes today?

— We often lump a very broad range of technologies under the label “artificial intelligence”. I will focus on AI agents and generative AI.

The first barrier, which is often forgotten, is the need to find and formulate a business case for using AI. It must be specific and quantified, with clear value for the company and clear metrics for executives.

Barrier number two is security. This is the most sensitive topic. Various assessment tools and standards are already emerging, but risks remain. Especially if a company uses open APIs, sends sensitive data, and does not have clear regulations on how to work with AI.

The third barrier is technical. The architecture of multi-agent systems, the complexity of integration, the lack of resources. Not every company has a mature team of developers ready to quickly integrate AI into processes. This is especially true for those who already have their own systems built and need to implement new technologies into them.

The fourth barrier is the lack of data. Even the most advanced models are useless without training material. Data needs to be collected, labeled, verified, stored, and this is a huge job.

Finally, the fifth barrier is internal culture. It is worth noting that, according to a Gallup study, only 16% of employees use AI in their daily work, while among managers the figure is already 33%. The gap between the levels is colossal, and it is also linked to the lack of a culture of safe and conscious use of AI tools. The transformation must therefore come not only from the top but also run deep, through training, involvement, and the demystification and democratization of technology.

— Today, there is a lot of talk about the democratization of artificial intelligence. How realistic is it to make work with AI and cloud technologies accessible to those who do not have specialized expertise?

— More and more low-code and no-code platforms are appearing on the market, which allow developing agents without programming skills. This is no longer science fiction: you simply set the logic through a visual interface, select tools, set up a sequence — and the agent works.

But complex scenarios still require developers. Especially in large businesses, where systems have been developed for years and contain unique elements. There, open source or a visual designer is not enough – deep integration is needed.

We offer tools for both of these audiences. On the one hand, our platform is open to developers who can customize everything for themselves. On the other, there are ready-made tools that even a less experienced user can handle. Overall, we strive to make the entry threshold ever lower and the interface ever more intuitive.

— What challenges do companies face when integrating AI into existing infrastructure?

— The main challenge today is the lack of computing resources. And this concerns not only Russia, but also the global market. This problem is especially acute due to the growing popularity of language models: they require powerful GPUs, scalable architecture, and stable channels.

We address this both by providing the capacity itself and through tools that help use it efficiently. A simple example is our Cloud.ru Evolution ML Inference service. It lets you use GPUs as flexibly as possible: for instance, split a single card’s capacity into several parts and run multiple models on it at once. This reduces costs and increases efficiency.
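The idea of packing several models onto one card can be illustrated with a toy first-fit scheduler. The memory sizes and model names below are invented, and real services such as Evolution ML Inference use far more sophisticated sharing mechanisms; this only shows why slicing a GPU raises utilization:

```python
def pack_models(gpu_mem_gb, models):
    """Greedy first-fit: give each model a memory slice if it still fits.

    models is a list of (name, required_gb) pairs; returns the names
    that were placed on the card and the memory left over.
    """
    placed, free = [], gpu_mem_gb
    for name, need_gb in models:
        if need_gb <= free:
            placed.append(name)
            free -= need_gb
    return placed, free

# A hypothetical 24 GB card: three small models share it,
# while one oversized model does not fit and must go elsewhere.
placed, free = pack_models(24, [
    ("slm-a", 8),
    ("slm-b", 8),
    ("llm-big", 40),
    ("slm-c", 6),
])
```

Instead of one model idling on a whole card, three smaller models occupy 22 of the 24 GB, which is the cost-and-efficiency gain described above.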

Another challenge is a paradigm shift. Many companies are used to infrastructure as hardware. They come to the cloud for servers, GPUs, storage units. But today the cloud is no longer about hardware, but about intelligent tools. We want both developers and regular users to perceive the cloud as a place where they can simply solve their problem using ready-made and understandable services, without delving into the infrastructure.

And the third challenge is not directly about technology or business approach: it is about creating a culture of using AI in the company. This means giving people access to the tools, deliberately setting challenges, and encouraging the use of AI to solve problems.

Security and horizons of AI development

— How is the approach to ensuring security changing against the backdrop of the active implementation of artificial intelligence? What risks do companies face most often?

— Approaches to security are changing today, and companies are beginning to review regulations at all levels. This concerns internal regulations, work with employees, and technical architecture.

To put it bluntly, security is not just about technology. It is about culture. Statistically, most leaks are not caused by hacks, but by mistakes made by employees themselves. Someone accidentally sent data to an open API, someone saved a sensitive file in the wrong cloud. That is why it is important to educate users — to tell them what they can do with AI, how to work with open interfaces correctly, and what information should never be transferred to third-party services.

Next, technical measures. One of the priorities is to deploy models within a closed perimeter, without data leaving Russia. This is especially important when working with personal data. We provide this option: deploy the model locally, access it through a private channel, and be sure that no data will leak.

We also use various methods of protection against prompt injection (a type of attack on large language models in which an attacker embeds malicious or unexpected instructions in the prompt text to change the model’s behavior), track the behavior of models, and monitor their operation. Without constant analytics and control, it is simply impossible to bring such tools into a corporate infrastructure.
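A toy pre-filter gives a flavor of one layer of prompt-injection defense. The patterns below are illustrative only; production systems combine trained classifiers, output monitoring, and strict tool permissions rather than simple regexes:

```python
import re

# Illustrative patterns only: real defenses are not keyword lists.
SUSPICIOUS = [
    r"ignore (all|previous) instructions",
    r"reveal .*system prompt",
    r"disregard .*rules",
]

def flag_injection(user_text: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_text.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS)
```

Such a filter is only a first gate; it would be placed in front of the model, with flagged requests routed to stricter handling or human review.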

— How has the user experience of interacting with AI tools changed over the past few years? What trends do you see?

— Firstly, voice interfaces. There are already “smart” speakers, voice assistants like “Alice”, “Salut” — almost every home has such a device. But, oddly enough, voice has not yet become a full-fledged entry point into AI. People are not always ready to solve complex problems with their voice. But this is temporary, we are not writing off voice interfaces, they will develop.

The second is maximum simplification of interfaces. The user should not fill out complex forms or understand the cloud structure. The ideal scenario: the agent does everything for you, and you simply confirm the action. Minimum clicks – maximum results. In our products, we strive for exactly this scenario, with maximum automation and ease of use of cloud services.

An important source of insight for us is the young audience. Look at how children use Minecraft or Roblox: they understand the mechanics intuitively, and interaction with the digital environment is natural to them. This, I think, is where new approaches to UI will come from. We must create interfaces in which the user simply formulates the task and the AI does the rest.

In the future, we will need to answer the question: how far can we advance in the symbiotic interaction of humans and AI? On the one hand, technologies offer us the opportunity not only to relieve humans of routine, but also to expand the boundaries of their potential. On the other hand, automation carries risks: the structure of some algorithms is not always clear, questions arise about the balance between responsibility and trust in the machine. The outcome is still impossible to predict – much will depend on how we manage to build interaction taking into account ethics, flexible architecture, and the culture of use. I am convinced that our task is to move in this direction as consciously as possible, experimenting so that technologies truly become an organic and safe extension of humans, and not their competitor.

— What do you think will be the next milestone in AI development and where is the market heading in general?

— I agree with the idea recently voiced by the head of OpenAI, Sam Altman: a real revolution will happen when one person creates a $1 billion business with the help of AI. I think this is quite realistic and we are getting closer to it.

When it comes to technology, I see several big trends.

The first is automation of agent systems. More and more tasks will be performed autonomously, without human intervention. These are not just assistants, they are full-fledged digital employees. And companies are already starting to perceive them as such.

The second is robotics. Not humanoid robots, but industrial, production-grade solutions that will perform tasks in warehouses, factories, and logistics. The CEO of Nvidia talks about this too: falling component costs and growing computing power make this area extremely promising. And here, demand will grow for cloud services and infrastructure to train the AI models at the core of these systems.

Another important trend is hybrid platforms. For example, we are developing Evolution Stack — an environment where you can use your own infrastructure and connect to a public cloud if necessary. This is especially important in conditions where there is a shortage of computing resources, but you need to scale quickly. At our GoCloud conference in April, we announced the development of Cloud.ru Evolution Stack AI bundle — a platform for solving AI problems in a hybrid scenario. It simplifies the launch and scaling of AI services in business and lowers the entry threshold for users to develop AI-based solutions.

Of course, we also see a trend from large models toward domain-specific Small Language Models (SLMs). This is a logical step: optimize infrastructure use, increase accuracy, and reduce costs. SLMs are among the tools that genuinely bring AI closer to business.

And finally, data. The question of access to high-quality data remains open. Amazon and other big players invest in companies that do data labeling; without it, no AI will work. Data, its protection, and local storage are therefore also part of the AI landscape of the future.
