Category: Future Technology

  • How Artificial Intelligence Is Changing Software Development

    How Artificial Intelligence Is Changing Software Development

    AI is changing the rules of the game in software development: it generates code, automates routine processes, and speeds up product releases. Experts explain what opportunities this opens up for business

    In 2018, research company Gartner predicted that software development teams would begin to use AI en masse in the near future. Experts accurately anticipated the future trend. In 2023, 41% of the code base was written by AI, at Google – more than a quarter . As of 2024, AI has already generated 256 billion lines of software code.

    In May 2024, the international platform Stack Overflow conducted a survey among more than 65 thousand developers around the world. AI is used unevenly in different areas: 82% use it for writing code, 57% for fixing bugs, 40% for creating documentation, 27% for testing, less than 5% for deploying and monitoring application operation. The respondents named documentation (81%), testing (80%) and writing (76%) code as the most promising areas for the near future.

    Scope of AI application

    Modern AI tools change the functionality of each specialist in the development team – analyst, programmer, tester. “The role of the engineer is shifting from routine development to managing the architecture and quality of the project. And artificial intelligence takes on tasks that can be quickly and easily automated,” said Dmitry Medvedev, Director of the Applied Solutions Department at Lanit-Tercom.

    The greatest enthusiasm is caused by the ability to generate working code. “This is an area where you don’t even need to prove anything to anyone, it’s so obvious to everyone that AI speeds up processes,” stated Vladislav Balayev, head of practice at the Lanit Big Data and Artificial Intelligence Competence Center. “On average, by 50%, meaning that a developer can already perform twice as many tasks. Routine operations (writing simple tests or restructuring code) can be almost entirely delegated to generative tools.”

    “If you have a startup and have a clearly defined set of rules and requirements, then, in principle, artificial intelligence can generate a quality project from zero to MVP (minimum viable product. — Ts Trends ),” says Dmitry Medvedev. “In the future, it will certainly be necessary to involve more experienced developers and architects to improve and launch the product into operation.”

    There are a number of AI tools that help the developer. Popular foreign services include Cursor, Windsurf, GitHub Copilot. There are also products – GigaCode, SourceCraft Code Assistant, Kodify.

    At the same time, the scope of AI application in development is diverse, as evidenced, in particular, by a study conducted by Lanit-BPM in 2024. The company said that AI tools can not only write code at the level of a junior developer, but also explain algorithms, generate unit tests, test cases, documentation, decipher recordings of meetings with customers, and answer questions on project documentation.

    Alexander Nozik noted that more and more studies are appearing now showing that the main benefit of AI is in searching for information and solving secondary problems. “For example, programmers really don’t like writing documentation, but language models (not even large ones, but local ones) cope with this very well,” he noted.

    In prototyping, the use of AI reduces the time it takes to create an MVP from several months to weeks or days, Dmitry Medvedev said. In addition, AI can help improve the quality of the code: it analyzes historical data, identifies vulnerabilities, and predicts potential errors, which reduces the number of bugs and increases the reliability of products.

    AI is also being implemented in the work of analysts: companies are experimenting, looking for tasks that can be automated, Vladislav Balayev emphasized. Neural tools can help analysts in recording and summarizing meetings, searching the knowledge base and other routine processes.

    One such tool is Landev AI’s Silicon Assistants platform. It allows you to locally deploy large language models (LLM), including code generation models, and use them in both chat mode and complex document, audio, and image processing pipelines. This allows employees to safely test hypotheses and share ideas within the team.

    For example, the platform can be used at the stage of collecting and analyzing customer requirements, says Vladislav Balayev: “The customer describes his ideas, he is asked questions. And then you need to make a summary from this – and this process is accelerated four times due to AI, and AI can work on several projects at once.” A promising direction is to formalize the result in the form of a ready-made specification, added Alexander Lutai.

    The use of AI has its limitations and disadvantages. It is important to remember that models are trained on open existing code, which may contain vulnerabilities, and, accordingly, reproduce them, warns Alexander Lutai. “AI-generated code is often fragile, it breaks with small changes in the task statement. Solving complex tasks using AI is much more labor-intensive than classical methods,” Alexander Nozik noted.

    Experts agree: AI is useful because it frees employees from performing standard tasks and automates routine work. “Of course, a developer should retain expertise in software development, have a good knowledge of the programming languages ​​used in the project, and be able to write basic constructions,” noted Alexander Lutai. “But if all the code is written manually, it will take too much time. AI tools can act as assistants to the developer: he will have more time for more creative tasks that will add value to the company — improving the product or coming up with a new one, responding to feedback from users.”

    Safety and possible risks

    Neural assistants consist of two parts, explains Alexander Lutai. The first part is a development environment or interface where the AI ​​assistant can be integrated. The second part is the actual large language model, which can be hosted either in the cloud or locally.

    Interaction with the cloud model assumes that some information — a developer’s request, a code base — will go beyond the company’s perimeter. “For some, this is unacceptable. In the case of locally deployed LLMs, this risk is eliminated, but resources are required. A model with a size of 8-14 billion parameters can be deployed on a fairly good computer, for larger models you need to buy a server. This costs money,” noted Alexander Lutai.

    “There is a good phrase: “There are no clouds, there are other people’s computers,” Nikolai Kostrigin reminded. “Of course, for processing official and especially confidential information, it is better to form your own infrastructure, although it is more expensive. For example, in the case of research into the development of secure software, when the processed data potentially contains information about vulnerabilities in the code, at least to guarantee the preservation of the embargo during the period of responsible disclosure.”

    However, it is obvious that public resources are being used and will continue to be used – at least to reduce development costs, the expert added.

    “When you send something outside, you take a risk: the place you send it to can be hacked, your message can be intercepted in the middle. A separate issue is that from the point of view of our country’s security, it is simply impossible to send code to external models, especially in government projects,” Vladislav Balayev emphasized. This creates risks of intellectual property leakage and inclusion of elements in the code that violate license agreements: the generated code may contain a fragment protected by copyright, says Dmitry Medvedev.

    For sensitive code bases in corporations, the use of commercial network large models is usually not considered at all – large companies rely on the deployment of local models, notes Alexander Nozik.

    Implementing AI: Expert Advice

    For entrepreneurs and investors, the increasing spread of AI means a fundamental shift in approaches to creating digital products. “If developers do not learn to operate with large language models, generate code, use certain editors or plugins for this, then they will simply become uncompetitive in the coming months, they will lose momentum,” warns Vladislav Balayev.

    At the same time, experts emphasize: it is important to correctly use the capabilities of AI. “The main danger here is to try to solve all problems with the help of AI. This usually only leads to increased costs,” says Alexander Nozik. For the successful implementation of AI, it is necessary to conduct a study of business processes and find fairly simple tasks that can be entrusted to it, he noted.

    It is very important to have a clear understanding of where artificial intelligence can be used, Dmitry Medvedev noted: “AI will not take on all the tasks. You will still need employees to monitor the results, and you need to clearly define the area where AI will be implemented.”

    Effective use of AI requires the ability to restructure thinking, experts note. “First, you need to understand where the boundaries of the data that can be given to external services are,” advises Alexander Lutai. “Then invest in training employees in the correct communication with models, writing prompts. You can use cloud LLM in those issues where compliance allows it. And thus, specifically for yourself, feel out those areas of application where LLM helps to solve problems faster.”

    The scenarios that have proven effective need to be used to form a knowledge base, the speaker continues: “People will start using them. Because if you simply give access to the model, it will be difficult for most employees to trust this tool and start using it effectively.” And for the data that cannot be given outside, it is necessary to select a suitable LLM, deploy it within the company’s perimeter, and then create more specialized solutions based on it, added Alexander Lutai. In all this work, it is best to seek qualified advice from professionals, experts emphasize.

    Prospects

    Artificial intelligence has already become an integral part of the software development process, changing traditional approaches and increasing the efficiency of teams. “Now this is not just a new trend, but stable and effective work in the product environment,” Dmitry Medvedev noted. “I think the role of AI will only increase in the near future.”

    The future belongs to hybrid solutions, where neural networks complement human skills. “Artificial intelligence is a support tool, not a replacement for the developer’s professional experience,” Dmitry Medvedev emphasized. “AI will not take over all functions. It will help in code generation, in relatively simple tasks. But if the developer, programmer, or employee does not understand what AI has generated, this will very quickly lead to a crisis in the project.”

    “I think that as tools become more widespread and the hype around them subsides, AI will become as much a given as an IDE (integrated development environment. — Ts Trends ) or static code analyzers,” says Alexander Nozik. “Open-source models are gradually catching up with proprietary ones in terms of quality, so the security problem in terms of a closed circuit will also be solved.”

  • Global Microelectronics Market in 2025: Current Status and Trends

    Global Microelectronics Market in 2025: Current Status and Trends

    We study the state of the modern global microelectronics market, key drivers and development prospects, as well as the impact of the main trends in the industry using the example of the development plans of the largest microelectronics manufacturers

    Microelectronics is one of the fundamental industries for the technological development of the economy. The most significant areas of use of microelectronic components are telecommunications, computing, transport, industry, and consumer devices.

    The wide applicability of the industry’s products determines its long-term growth. From 2020 to 2024, the microelectronics market grew at an average annual rate of 9% . At the same time, while demonstrating sustainable expansion in the long term, it is subject to local declines. In 2023, there was a decrease in the volume of microelectronics consumption against the backdrop of a decrease in demand in a number of industries, primarily from consumer devices.

    Dynamics and structure of the global microelectronics market, billion US dollars (Photo: SIA, WSTS, SEMI, analysis by Strategy Partners)
    In 2024, the market recovered and reached $627 billion , with the key growth driver being the increased consumption of video cards and processors for artificial intelligence (AI) used to create data centers (DPCs). The main sectors consuming microelectronics products are computing equipment and telecommunications: they account for 58% of the market .

    Demand-side trends and forecast of microelectronics consumption volume

    In the long term, demand for the industry’s products will grow as digitalization develops and new technologies are applied. As part of our analytical special project for the upcoming Microelectronics forum, it was established that the main factor stimulating the growth of demand for microelectronic components is the active development of a number of key technologies:

    AI;
    cloud computing: a high proportion of computing is moving to the cloud, with the number of servers and data centers growing;
    development of telecommunications: growth in the number of devices and network speeds, modernization of communications infrastructure;
    new technologies in the transport industry: growth of the electric and hybrid vehicle segment, introduction of driverless driving systems and advanced driver assistance systems (ADAS);
    Edge computing: the growth of small data centers located at the edge of the network;
    growing demand for power electronics due to rising energy consumption: general digitalization, increasing number of electric vehicles and data centers;
    Cybersecurity: Complex security methods require more microelectronics for encryption;
    Industry 4.0: automation of production through the introduction of digital equipment and the Industrial Internet of Things (IIoT).
    As a result of the development of these areas , it is predicted that the volume of demand for microelectronics products will grow at a rate of 8% to $1 trillion in 2030. The largest share of the growth of the global microelectronics market will be provided by two industries: computing equipment and telecommunications equipment. The total share of these industries in the consumption structure will grow from 58% in 2024 to 65% in 2030 .

    Supply-side trends

    The microelectronics market is global in nature, with distinct regional specialization in individual production processes. The emergence and deepening of specialization at individual stages of chip creation is due to the capital intensity and complexity of the production process. The microelectronics market is characterized by high consolidation: 56% of its volume is controlled by ten leading companies from the United States, the Republic of Korea, and Germany. According to the results of 2024, the top three leaders in terms of revenue from sales of microelectronics products were Samsung Electronics, Intel, and Nvidia. Nvidia’s capitalization reached $4 trillion in July 2025 , overtaking Apple and Microsoft and becoming the most expensive company in the world.

    The bulk of the supply is provided by a pool of six countries/regions: the United States, the European Union, South Korea, Japan, Taiwan, and China. The United States is a leader in both chip development and production. The development and implementation of new technologies in the field of microelectronics is carried out in the above countries/regions under the auspices of state implementation centers with large-scale state funding.

    Structure of the global microelectronics market by production processes, % of revenue of companies in the segment, % of added value (Photo: BCG, SIA)
    The key trend in the development of the production base in the field of microelectronics is sovereignty. Against the background of a high level of geographical concentration – geopolitical tension (which is expressed in the strengthening of trade restrictions and the intensification of armed conflicts), the possibility of pandemics or natural disasters – there are significant risks of failure of logistics chains. This dictates the need to develop full-cycle production within individual countries/regions. The effect of this trend is manifested in all leading manufacturing countries: they are implementing development programs and introducing market protection mechanisms.

    Currently, the leaders in terms of production capacity are  China, Taiwan and South Korea, the total share of these countries in the global capacity is more than 60%. The largest construction of new factories on the horizon of 2024-2032 is planned in Taiwan, the USA and South Korea, which should ensure an increase in production capacity in these countries within the specified period by 97, 203 and 129%, respectively.

    All development programs implemented by leading countries in the industry cover priority areas of technological development: the transition to smaller topologies and new packaging methods, the use of new materials, the use of photonic components, research in the field of quantum computing, and the creation of new transistor architectures.

    In addition to the leading countries, development programs are being implemented in a number of other countries. The largest investments among them are being made in India. A number of projects are being implemented here to create production facilities in partnership with world industry leaders: Micron, AMD, etc. Against the backdrop of the growing confrontation between the US and China, India may become the “second China” in the field of microelectronics production.

    Microelectronics production facilities

    The market for equipment and technologies in the field of microelectronics remains highly concentrated, with the key players being the US, EU and Japan. These countries account for 96% of global equipment production. The highest concentration is observed in the lithography and photoresist production segments, with individual countries accounting for approximately 90% of production (the leader in lithograph production is Holland, and in photoresist production, Japan).

    As for the relatively small segments in terms of share in the value chain – EDA, IP cores – here the three countries mentioned hold almost 100% of production with a significant predominance of US companies.

    The trend towards sovereignty that is characteristic of the microelectronics industry also covers the equipment and technology market. The most striking example illustrating this trend is China. As of 2022 , mainland China accounted for 20% of global equipment spending and 18% of global equipment imports. Due to export restrictions by the United States, Japan and the Netherlands, there is a need to develop domestically produced alternatives. To ensure technological independence in this area, China invested $25 billion in microelectronics equipment in the first half of 2024, with the total volume estimated for last year at about $50 billion . However, despite the implementation of an ambitious development program in 2012-2024, China is still behind in the technological development race in microelectronics.

    Cases of leading companies

    Key trends in the industry’s development directly affect the plans of manufacturing companies. At the same time, the largest suppliers of microelectronics not only respond to emerging trends, but also set the direction of the industry’s development.

    Samsung, one of the leading manufacturers of microelectronics, focuses on development in the following areas:

    increasing production of DRAM memory intended for AI accelerators;

    expansion of the product line for the automotive industry (the company is implementing a roadmap for the period up to 2027 for the release of eMRAM for use in vehicles);
    Together with another South Korean microelectronics maker, SK Hynix, the company plans to spend more than $470 billion to create a chip manufacturing cluster to achieve technological sovereignty.
    The development directions of another participant in the top 3 microelectronics manufacturers, Intel, are also based on key industry trends:

    focusing on AI in the development of new processors (most of the company’s developments presented at the Consumer Electronics Show (CES) 2025 are aimed at working with advanced language models and increasing the speed of AI-related operations);
    Intel is aggressively building factories in the US under a strategy known as IDM 2.0 , which helps protect against supply chain disruptions.

    NVidia, also one of the top 3 microelectronics manufacturers, presented the company’s plans for the future up to 2028 at the Computex 2025 and CES 2025 exhibitions. The key development areas for the coming years will be:

    The company is building its own data centers. NVidia is moving from supplying equipment to data center operators to designing and building new computing facilities. This is being implemented through partnerships with TSMC, Foxconn, Gigabyte, Asus, and others.
    Industry 4.0 developments: development of robotics and digital twins. An example of a project in this area is the creation of a digital twin of a new car manufacturing plant created by NVidia in partnership with BMW. The digital twin allows for modeling, testing and optimizing production processes before the plant itself is launched. The project aims to reduce the time it takes to create real production, as well as to increase the efficiency of production processes.
    Developments in the field of autonomous transport and ADAS. NVidia is developing a number of projects in partnership with industry leaders: Toyota, General Motors and others.

  • What are embeddings and how do they help AI understand the world better?

    What are embeddings and how do they help AI understand the world better?

    Together with an expert, we figure out how embeddings work, why they have become the basis of intelligent systems, how they are interconnected with cloud technologies, and what significance they have for the infrastructure of the future

    Artificial intelligence can write texts, recognize faces, recommend products, and even predict industrial failures — all thanks to its ability to understand abstract data. At the heart of this ability are embeddings, one of the key tools of machine learning. They allow complex and heterogeneous objects — words, images, products, users — to be translated into digital language that a machine can understand. Without them, AI would be just a set of formulas.

    Why AI Needs ‘Digital Translation’

    Human language is polysemantic and contextual. When we write “lock,” we can accurately determine whether we are talking about a fortress or a device for locking doors, depending on the context. For a machine, this is a challenge: words, images, events, search queries — all of this must be translated into a numerical format in order to compare, analyze, and train models.

    Embeddings are a way to do this translation. A word, image, or other object is represented by a vector — a numerical representation in a multidimensional space, trained on statistical relationships or large language models. These vectors allow the system to determine similarities between objects, build dependencies, and draw conclusions. For example, the embedding of the word “cat” will be closer to “animal” than to “car.”

    An example of a 3D projection of embeddings. Although real LLM embeddings cannot be visualized, the principle remains the same – words that are close in meaning are grouped in space (Photo: GitHab)
    A 2024 study showed that embeddings obtained using GPT‑3.5 Turbo and BERT AI models significantly improve the quality of text clustering. In the tasks of grouping news or reviews by topics, they allowed increasing cluster purity metrics and improving processing accuracy.

    How Embeddings Help AI Understand the World

    Embeddings enable neural networks to find connections between objects that are difficult to specify manually. For example, an online store’s recommendation system can determine that users interested in “hiking backpacks” often buy “hiking water filters” — even if these products are not directly related in the catalog. Embeddings capture statistical dependencies, user behavior, contexts, and even stylistic features of the text. This is the key to creating personalized services and scalable intelligent systems.

    The main task of embeddings is to transform complex data (text, images, behavior) into a set of numbers, in other words, a vector that is convenient for algorithms to work with. Vectors help AI find similarities, understand meaning, and draw conclusions. Moreover, many things can be represented as embeddings: individual words, entire phrases and sentences, images, sounds, and even user behavior.

    Texts and Language (NLP)

    It is important for AI not just to “read” the text, but to understand what is behind it. Embeddings allow models to capture hidden connections between words, to determine that, for example, “cat” is closer to “animal” than to “car”. More complex models can create embeddings not only for words, but also for entire sentences – this helps to more accurately analyze the meaning of phrases, which is important, for example, for chatbots or automatic translation systems.

    Images and visual content

    In computer vision, embeddings allow you to turn an image into a set of features — color, shape, texture, etc. This helps algorithms find similar images, recognize objects, or classify scenes: for example, distinguishing a beach from an office.

    Recommender systems and personalization

    Modern digital platforms create embeddings not only for content (movies, products), but also for the users themselves. This means that each user’s preferences are also represented as a vector. If your vector is close to another person’s vector, the system can offer you similar content. This approach makes recommendations much more accurate.

    How Embeddings Are Created: From Simple to Complex

    Embeddings can be thought of as a multidimensional space where each point is an object (word, image, user). The proximity between points in this space reflects the learned similarity. For example, in Word2Vec (an algorithm that turns words into vectors that reflect their meaning and similarity in meaning), the vectors of the words “king” and “queen” will be close, and their difference will be close to the difference between “man” and “woman”. However, in more modern models (e.g., BERT), vectors depend on the context, and such linear dependencies are weaker.

    There are different ways in which AI translates text, images, or sound into vectors—aka embeddings.

    Classic text models (e.g. Word2Vec or GloVe) create one vector per word. The difficulty is that they do not take into account the context. For example, the word “onion” will mean both a vegetable and a weapon – the model will not understand the difference.
    Modern transformer-based models (BERT, GPT, and others) work differently: they analyze the environment in which the word occurs and create a vector that corresponds to this very meaning. This is how AI understands what kind of “bow” we are talking about – green or rifle.
    For images, embeddings are built differently. Neural networks trained on huge arrays of images “extract” visual features from them: colors, shapes, textures. Each object in the image is also represented by a vector.
    Multimodal embeddings combine data from multiple sources at once — text, images, audio, video — and present them in one common vector space. This allows AI to find connections between different types of data. For example, to recognize that the caption “kitten playing with a ball” refers to a specific moment in a video or a fragment in a photo.
    Embeddings are at the heart of recommendation systems, voice assistants, computer vision, search systems, and many other applications. They allow us to find connections between objects, even if these connections are not explicitly stated.

    More and more often, attention is paid to adapting embeddings to specific tasks. For example, a model can not just “understand what the text is about,” but form a representation specifically for the desired purpose – be it legal analysis, customer support, or medical expertise. Such approaches are called instruction-tuned and domain-specific.

    Where Embeddings Live: Cloud Servers for AI

    Training and using embeddings is a resource-intensive process. Especially when billions of parameters and multimodal data are involved. Such tasks require:

    a large amount of computing power with GPU resources (specialized graphics processors designed for resource-intensive tasks);
    storage of vector databases;
    fast indexing and searching for nearby vectors;
    low latency in response generation, such as in chatbots and search.
    Therefore, the development of embeddings is closely linked to the growing demand for cloud computing and infrastructure optimized for AI workloads. To work with embeddings, businesses need not just virtual machines, but specialized servers with GPU support, high-speed storage, and flexible scalability.

    Such cloud solutions allow you to train and retrain your own models, launch services based on LLM, and integrate AI algorithms into websites, applications, and analytical systems. Cloud servers remove the barrier to entry into AI — businesses do not need to invest in their own cluster, they just need to choose the appropriate configuration for their model or service.

    Today, embeddings are the basis of search, recommendations, content generation and automation. In the coming years, they will become even more complex, individualized and contextual. AI will increasingly recognize the meaning, goals of the user, the context of interaction – and offer relevant answers. According to analysts , the global artificial intelligence market will grow by about 35% per year and will amount to $1.8 billion by 2030.

    But without a reliable infrastructure — fast, scalable, with support for vector databases and GPUs — such systems will be either slow or unavailable. That’s why the development of embeddings and cloud infrastructure go hand in hand: the former provides intelligence, the latter — power and flexibility.

  • AI, Agents, and Hybrid Platforms: How Technologies Are Rebooting the Cloud

    AI, Agents, and Hybrid Platforms: How Technologies Are Rebooting the Cloud

    How cloud technologies are changing under the influence of AI, how AI assistants differ from agents, and what security challenges arise when neural networks are introduced

    How Clouds Are Changing Under the Influence of AI

    — How is the approach to cloud management changing with the development of AI?

    — The approach is changing dramatically, not only to managing cloud infrastructure, but also in general to how people interact with digital systems, interfaces, and applications.

    The classic search engine is fading into the background – it is being replaced by neural network tools, such as, for example, GigaChat, ChatGPT, where the user wants to immediately receive a relevant, human, easy-to-understand answer. And not just find a link, but get a specific personalized result.

    We looked at this trend and created our own solution based on generative AI — the AI ​​assistant “Claudia”. This is not just a chatbot — it is a system that helps automate a large number of tasks related to cloud infrastructure.

    For example, a developer can already give Claudia the task of deploying a virtual machine, setting up a connection, deploying an infrastructure monitoring dashboard, and setting up alerting. Plans call for adding the ability to create a Kubernetes cluster or database through a dialogue with Claudia. Moreover, the solution also helps those who are not involved in IT at all — a conventional marketer or small business owner can quickly build a website or launch a landing page. All that is needed is to describe the task in text, like in a regular messenger. And the necessary actions will be performed by the AI ​​assistant, and not just voice recommendations.

    — In recent years, there has been increasing talk about AI assistants and AI agents. What is the fundamental difference between these entities and how does it manifest itself in cloud platforms?

    — Here’s a simple analogy: a chatbot is a vending machine. It can only perform the actions that are pre-programmed into it. It’s based only on programmed scenarios. If you want to buy a drink, the machine will give you exactly what you put in it.

    An AI assistant, or AI assistant, is already a “big brother”. It will not do everything for you, but will tell you how best to act, what steps to take, what to try. Such an assistant can work with a large array of information and help in decision-making, but the action is up to you.

    But an AI agent is an autonomous system. It sets and executes a task, and can do it without human intervention. The agent initiates actions, accesses the necessary sources, launches processes – for example, it can not only tell you how to book a hotel, but do it itself. It can process a request, enter data into the CRM, select a tour and make a reservation – all in the background. In the cloud, an agent can already deploy a basic infrastructure on its own in the near future, for example, launch a virtual machine or set up monitoring. Here, the AI ​​assistant becomes an access window to a complex digital system – the user sees a communication interface, and under the hood, a full-fledged multi-agent architecture works.

    — What tasks are AI assistants and agents already helping to solve in real business scenarios today?

    — Technologies already cover a whole range of areas — from user support to optimization of complex industrial processes and supply chains. I would name four key scenarios.

    The first is user support. The introduction of agents and assistants has a tangible effect. For example, now every tenth question to Cloud.ru customer support is processed by AI. The plan is to increase the share to 70% without reducing the quality.

    The second scenario is marketing and sales. AI helps marketers cope with tasks related to creativity, “white sheet”, content creation. In our company, the AI ​​assistant “Claudia” promptly generates preliminary calculations for commercial offers directly in chats with customers.

    The third scenario is development. We actively use AI assistants in development teams. In addition, there are solutions like GigaCode that help others implement generative AI in development. This is also a strong trend: tools like GitHub Copilot are no longer exotic, but part of everyday work.

    If we look at it more broadly, then multimodal AI is coming to the forefront globally. An example is a recent Google presentation: a user asks how to fix a bicycle, and the assistant builds a sequence of actions using an image and text. This is an agent system and a multimodal neural network in action. We are also moving in this direction in Russia. More and more companies are testing such scenarios.

    — To what extent does the implementation of AI agents in cloud management today correspond to global practices?

    — We at Cloud.ru have developed our own cloud environment for creating Cloud.ru Evolution AI Factory agents. Our colleagues from other Russian companies are also working in this direction, creating LLM platforms and AI services.

    Most tools are still built on open source technologies. This provides certain advantages for wider application of AI. Firstly, Russian developers take an active part in the development of international open source projects. We make a serious contribution to the development of the global AI ecosystem. And secondly, even relying on open solutions, you can create unique products. Therefore, I believe that our market has made significant progress in the development of AI solutions over the past year and a half.

    Implementation of AI in the company: barriers and solutions

    — Despite widespread implementation, not all companies have switched to working with AI in practice. What prevents companies from effectively implementing artificial intelligence in work processes today?

    — Under artificial intelligence we often unite a very broad range of technologies. I will focus on AI agents and generative AI.

    The first barrier, which is often forgotten, is the need to find and formulate a business case for using AI. It must be specific, digitized, with clear value for the company and clear metrics for executives.

    Barrier number two is security. This is the most sensitive topic. Various assessment tools and standards are already emerging, but risks remain. Especially if a company uses open APIs, sends sensitive data, and does not have clear regulations on how to work with AI.

    The third barrier is technical. The architecture of multi-agent systems, the complexity of integration, the lack of resources. Not every company has a mature team of developers ready to quickly integrate AI into processes. This is especially true for those who already have their own systems built and need to implement new technologies into them.

    The fourth barrier is the lack of data. Even the most advanced models are useless without training material. Data needs to be collected, labeled, verified, stored, and this is a huge job.

    Finally, the fifth barrier is the issue of internal culture. And here it is important to note that, according to a Gallup study , only 16% of employees use AI in their daily work, despite the fact that among managers this figure is already 33%. That is, the gap between the levels is colossal. And it is also associated with the lack of a culture of safe and conscious use of AI tools. Therefore, the transformation must go not only from the top, but also deep, through training, involvement, demystification and democratization of technology.

    — Today, there is a lot of talk about the democratization of artificial intelligence. How realistic is it to make work with AI and cloud technologies accessible to those who do not have specialized expertise?

    — More and more low-code and no-code platforms are appearing on the market, which allow developing agents without programming skills. This is no longer science fiction: you simply set the logic through a visual interface, select tools, set up a sequence — and the agent works.

    But complex scenarios still require developers. Especially in large businesses, where systems have been developed for years and contain unique elements. There, open source or a visual designer is not enough – deep integration is needed.

    we offer tools for these two audiences. On the one hand, our platform is open to developers who can customize everything for themselves. On the other hand, there are ready-made tools that even a less experienced user can handle. In general, we strive to make the entry threshold ever lower and the interface ever more intuitive.

    — What challenges do companies face when integrating AI into existing infrastructure?

    — The main challenge today is the lack of computing resources. And this concerns not only Russia, but also the global market. This problem is especially acute due to the growing popularity of language models: they require powerful GPUs, scalable architecture, and stable channels.

    We solve this problem both in terms of providing the capacities themselves and through tools that help use them efficiently. A simple example is our Cloud.ru Evolution ML Inference service. It allows you to use GPUs as flexibly as possible: for example, distribute the capacities of one video card into several parts and run several models on it at once. This reduces costs and increases efficiency.

    Another challenge is a paradigm shift. Many companies are used to infrastructure as hardware. They come to the cloud for servers, GPUs, storage units. But today the cloud is no longer about hardware, but about intelligent tools. We want both developers and regular users to perceive the cloud as a place where they can simply solve their problem using ready-made and understandable services, without delving into the infrastructure.

    And the third challenge is not directly related to technology or business approach. It is about creating a culture of using AI in the company. This requires providing opportunities to use tools, consciously creating challenges, and encouraging the use of AI to solve problems.

    Security and horizons of AI development

    — How is the approach to ensuring security changing against the backdrop of the active implementation of artificial intelligence? What risks do companies face most often?

    — Approaches to security are changing today, and companies are beginning to review regulations at all levels. This concerns internal regulations, work with employees, and technical architecture.

    To put it bluntly, security is not just about technology. It is about culture. Statistically, most leaks are not caused by hacks, but by mistakes made by employees themselves. Someone accidentally sent data to an open API, someone saved a sensitive file in the wrong cloud. That is why it is important to educate users — to tell them what they can do with AI, how to work with open interfaces correctly, and what information should never be transferred to third-party services.

    Next, technical measures. One of the priorities is to deploy models in a closed circuit, without going beyond the borders of Russia. Especially when it comes to working with personal data. We provide this opportunity: deploy the model locally, access it through a private channel, and be sure that no data will leak.

    We also use various methods of protection against prompt injection (a type of attack on large language models, in which an attacker introduces malicious or unexpected instructions into the prompt text to change the model’s behavior. — Ts Trends ), track the behavior of models, and monitor their operation. Without constant analytics and control, it is simply impossible to implement such tools into a corporate infrastructure.

    — How has the user experience of interacting with AI tools changed over the past few years? What trends do you see?

    — Firstly, voice interfaces. There are already “smart” speakers, voice assistants like “Alice”, “Salut” — almost every home has such a device. But, oddly enough, voice has not yet become a full-fledged entry point into AI. People are not always ready to solve complex problems with their voice. But this is temporary, we are not writing off voice interfaces, they will develop.

    The second is maximum simplification of interfaces. The user should not fill out complex forms or understand the cloud structure. The ideal scenario: the agent does everything for you, and you simply confirm the action. Minimum clicks – maximum results. In our products, we strive for exactly this scenario, with maximum automation and ease of use of cloud services.

    An important source of insights for us is the young audience. Look at how children use Minecraft, Roblox: they intuitively understand the mechanics. Interaction with the digital environment is natural for them. This is where, I think, new approaches to UI will come from. We must create interfaces in which the user simply formulates the task, and the AI ​​does the rest for him.

    In the future, we will need to answer the question: how far can we advance in the symbiotic interaction of humans and AI? On the one hand, technologies offer us the opportunity not only to relieve humans of routine, but also to expand the boundaries of their potential. On the other hand, automation carries risks: the structure of some algorithms is not always clear, questions arise about the balance between responsibility and trust in the machine. The outcome is still impossible to predict – much will depend on how we manage to build interaction taking into account ethics, flexible architecture, and the culture of use. I am convinced that our task is to move in this direction as consciously as possible, experimenting so that technologies truly become an organic and safe extension of humans, and not their competitor.

    — What do you think will be the next milestone in AI development and where is the market heading in general?

    — I agree with the idea recently voiced by the head of OpenAI, Sam Altman: a real revolution will happen when one person creates a $1 billion business with the help of AI. I think this is quite realistic and we are getting closer to it.

    When it comes to technology, I see several big trends.

    The first is automation of agent systems. More and more tasks will be performed autonomously, without human intervention. These are not just assistants, they are full-fledged digital employees. And companies are already starting to perceive them as such.

    The second is robotics. And not in the sense of humanoid robots, but industrial, production solutions that will perform tasks in a warehouse, at a factory, in logistics. The CEO of Nvidia also talks about this: the reduction in the cost of components, the growth of computing power makes this area extremely promising. And here the demand for cloud services and infrastructure for training AI models at their core will grow.

    Another important trend is hybrid platforms. For example, we are developing Evolution Stack — an environment where you can use your own infrastructure and connect to a public cloud if necessary. This is especially important in conditions where there is a shortage of computing resources, but you need to scale quickly. At our GoCloud conference in April, we announced the development of Cloud.ru Evolution Stack AI bundle — a platform for solving AI problems in a hybrid scenario. It simplifies the launch and scaling of AI services in business and lowers the entry threshold for users to develop AI-based solutions.

    Of course, we see a trend towards moving from large models to domain-specific Small Language Models . This is a logical step: optimize the use of infrastructure, increase accuracy, reduce costs. SLM is one of those tools that really brings AI closer to business.

    And finally, data. The question of accessibility of high-quality data remains open. Amazon and other big players invest in companies that do labeling. Without this, no AI will work. Therefore, data, its protection, local storage are also part of the AI ​​landscape of the future.