Author: zulqarnain

  • How Artificial Intelligence Is Changing Software Development

    AI is changing the rules of the game in software development: it generates code, automates routine processes, and speeds up product releases. Experts explain what opportunities this opens up for business.

    In 2018, the research company Gartner predicted that software development teams would soon begin to use AI en masse. The forecast proved accurate: in 2023, 41% of code was written by AI, while at Google the share was more than a quarter. As of 2024, AI had already generated 256 billion lines of software code.

    In May 2024, the international platform Stack Overflow surveyed more than 65,000 developers around the world. AI is used unevenly across different areas: 82% use it for writing code, 57% for fixing bugs, 40% for creating documentation, 27% for testing, and less than 5% for deploying and monitoring applications. Respondents named documentation (81%), testing (80%), and writing code (76%) as the most promising areas for the near future.

    Scope of AI application

    Modern AI tools are changing the role of every specialist on the development team – analyst, programmer, tester. “The role of the engineer is shifting from routine development to managing the architecture and quality of the project. And artificial intelligence takes on tasks that can be quickly and easily automated,” said Dmitry Medvedev, Director of the Applied Solutions Department at Lanit-Tercom.

    The greatest enthusiasm surrounds AI’s ability to generate working code. “This is an area where you don’t even need to prove anything to anyone – it’s so obvious to everyone that AI speeds up processes,” stated Vladislav Balayev, head of practice at the Lanit Big Data and Artificial Intelligence Competence Center. “On average, by 50%, meaning that a developer can already complete twice as many tasks. Routine operations (writing simple tests or restructuring code) can be almost entirely delegated to generative tools.”

    “If you have a startup and a clearly defined set of rules and requirements, then, in principle, artificial intelligence can generate a quality project from zero to MVP (minimum viable product),” says Dmitry Medvedev. “Later on, it will certainly be necessary to involve more experienced developers and architects to refine the product and launch it into operation.”

    A number of AI tools help the developer. Popular international services include Cursor, Windsurf, and GitHub Copilot; there are also alternatives such as GigaCode, SourceCraft Code Assistant, and Kodify.

    At the same time, the scope of AI application in development is broad, as evidenced, in particular, by a study conducted by Lanit-BPM in 2024. According to the company, AI tools can not only write code at the level of a junior developer, but also explain algorithms, generate unit tests, test cases, and documentation, transcribe recordings of meetings with customers, and answer questions about project documentation.

    Alexander Nozik noted that a growing number of studies show that the main benefit of AI lies in searching for information and solving secondary problems. “For example, programmers really don’t like writing documentation, but language models (not even large ones – even local ones) cope with this very well,” he noted.

    In prototyping, the use of AI reduces the time it takes to create an MVP from several months to weeks or days, Dmitry Medvedev said. In addition, AI can help improve the quality of the code: it analyzes historical data, identifies vulnerabilities, and predicts potential errors, which reduces the number of bugs and increases the reliability of products.

    AI is also being introduced into the work of analysts: companies are experimenting and looking for tasks that can be automated, Vladislav Balayev emphasized. AI tools can help analysts record and summarize meetings, search the knowledge base, and handle other routine processes.

    One such tool is Landev AI’s Silicon Assistants platform. It allows you to locally deploy large language models (LLM), including code generation models, and use them in both chat mode and complex document, audio, and image processing pipelines. This allows employees to safely test hypotheses and share ideas within the team.

    For example, the platform can be used at the stage of collecting and analyzing customer requirements, says Vladislav Balayev: “The customer describes his ideas and is asked questions. Then you need to produce a summary from this – and this process is accelerated fourfold by AI, which can also work on several projects at once.” A promising direction is to formalize the result as a ready-made specification, added Alexander Lutai.

    The use of AI has its limitations and disadvantages. It is important to remember that models are trained on existing open code, which may contain vulnerabilities and can therefore reproduce them, warns Alexander Lutai. “AI-generated code is often fragile: it breaks with small changes in the task statement. Solving complex tasks with AI is much more labor-intensive than with classical methods,” Alexander Nozik noted.

    Experts agree: AI is useful because it frees employees from performing standard tasks and automates routine work. “Of course, a developer should retain expertise in software development, have a good knowledge of the programming languages used in the project, and be able to write basic constructions,” noted Alexander Lutai. “But if all the code is written manually, it will take too much time. AI tools can act as assistants to the developer: he will have more time for more creative tasks that will add value to the company — improving the product or coming up with a new one, responding to feedback from users.”

    Safety and possible risks

    Neural assistants consist of two parts, explains Alexander Lutai. The first part is a development environment or interface where the AI assistant can be integrated. The second part is the actual large language model, which can be hosted either in the cloud or locally.

    Interaction with the cloud model assumes that some information — a developer’s request, a code base — will go beyond the company’s perimeter. “For some, this is unacceptable. In the case of locally deployed LLMs, this risk is eliminated, but resources are required. A model of 8–14 billion parameters can be deployed on a fairly good computer; for larger models you need to buy a server. This costs money,” noted Alexander Lutai.
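
    The two-part setup described above (an editor or interface plus a model hosted either in the cloud or inside the company perimeter) can be pictured with a short sketch. The snippet below is a minimal illustration, assuming the local model is exposed through an OpenAI-compatible HTTP endpoint (as servers such as vLLM or Ollama typically provide); the endpoint URL, model names, and prompt are placeholders, not details of any product mentioned in the article.

    ```python
    # Minimal sketch: the same chat call can target either a cloud provider or a
    # model hosted inside the company perimeter, assuming an OpenAI-compatible
    # endpoint. URLs and model names below are illustrative placeholders.
    from openai import OpenAI

    # Cloud-hosted model: the request (prompt, code fragments) leaves the perimeter.
    cloud = OpenAI(api_key="YOUR_API_KEY")

    # Locally deployed model (e.g., an 8-14B-parameter model behind vLLM or Ollama):
    # nothing leaves the company network, but you pay for the hardware.
    local = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

    def review_snippet(client, model, code):
        """Ask the assistant to review a code fragment."""
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "You are a code review assistant."},
                {"role": "user", "content": f"Find potential bugs in:\n{code}"},
            ],
        )
        return response.choices[0].message.content

    snippet = "def div(a, b):\n    return a / b"
    print(review_snippet(local, "llama-3.1-8b-instruct", snippet))
    ```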

    “There is a good phrase: ‘There are no clouds, there are only other people’s computers,’” Nikolai Kostrigin reminded. “Of course, for processing official and especially confidential information, it is better to build your own infrastructure, although it is more expensive. For example, in research on secure software development, where the processed data may contain information about code vulnerabilities, this at least guarantees that the embargo is preserved during the responsible disclosure period.”

    However, it is obvious that public resources are being used and will continue to be used – at least to reduce development costs, the expert added.

    “When you send something outside, you take a risk: the place you send it to can be hacked, your message can be intercepted in the middle. A separate issue is that from the point of view of our country’s security, it is simply impossible to send code to external models, especially in government projects,” Vladislav Balayev emphasized. This creates risks of intellectual property leakage and inclusion of elements in the code that violate license agreements: the generated code may contain a fragment protected by copyright, says Dmitry Medvedev.

    For sensitive code bases in corporations, the use of commercial cloud-based large models is usually not considered at all – large companies rely on deploying local models, notes Alexander Nozik.

    Implementing AI: Expert Advice

    For entrepreneurs and investors, the increasing spread of AI means a fundamental shift in approaches to creating digital products. “If developers do not learn to operate with large language models, generate code, use certain editors or plugins for this, then they will simply become uncompetitive in the coming months, they will lose momentum,” warns Vladislav Balayev.

    At the same time, experts emphasize: it is important to correctly use the capabilities of AI. “The main danger here is to try to solve all problems with the help of AI. This usually only leads to increased costs,” says Alexander Nozik. For the successful implementation of AI, it is necessary to conduct a study of business processes and find fairly simple tasks that can be entrusted to it, he noted.

    It is very important to have a clear understanding of where artificial intelligence can be used, Dmitry Medvedev noted: “AI will not take on all the tasks. You will still need employees to monitor the results, and you need to clearly define the area where AI will be implemented.”

    Effective use of AI requires the ability to restructure thinking, experts note. “First, you need to understand where the boundaries of the data that can be given to external services are,” advises Alexander Lutai. “Then invest in training employees in the correct communication with models, writing prompts. You can use cloud LLM in those issues where compliance allows it. And thus, specifically for yourself, feel out those areas of application where LLM helps to solve problems faster.”

    The scenarios that have proven effective need to be used to form a knowledge base, the speaker continues: “People will start using them. Because if you simply give access to the model, it will be difficult for most employees to trust this tool and start using it effectively.” And for the data that cannot be given outside, it is necessary to select a suitable LLM, deploy it within the company’s perimeter, and then create more specialized solutions based on it, added Alexander Lutai. In all this work, it is best to seek qualified advice from professionals, experts emphasize.

    Prospects

    Artificial intelligence has already become an integral part of the software development process, changing traditional approaches and increasing the efficiency of teams. “Now this is not just a new trend, but stable and effective work in the product environment,” Dmitry Medvedev noted. “I think the role of AI will only increase in the near future.”

    The future belongs to hybrid solutions, where neural networks complement human skills. “Artificial intelligence is a support tool, not a replacement for the developer’s professional experience,” Dmitry Medvedev emphasized. “AI will not take over all functions. It will help in code generation, in relatively simple tasks. But if the developer, programmer, or employee does not understand what AI has generated, this will very quickly lead to a crisis in the project.”

    “I think that as tools become more widespread and the hype around them subsides, AI will become as much a given as an IDE (integrated development environment) or static code analyzers,” says Alexander Nozik. “Open-source models are gradually catching up with proprietary ones in terms of quality, so the security problem of working within a closed perimeter will also be solved.”

  • 15 Strategic Tech Trends for 2025 That Will Change the World

    The world is changing faster than we can get used to. What was surprising yesterday is now being implemented in hospitals, businesses, stores, and homes. Artificial intelligence doesn’t just generate texts — it heals, edits videos, analyzes risks, and helps designers. And 2025 can definitely be called the year when countries began to build “sovereign AI” and launch a new space economy.

    In this article, we’ll highlight 15 technological changes that are already shaping the new reality. We’ll look at examples from different countries to better understand the scale and direction of the movement. And even if some trends seem distant, they’re already knocking on the doors of business, education, and medicine.

    AI is moving from hype to practical tools

    Just a few years ago, AI was associated with fun: drawing pictures or generating text. In 2025, it is already a full-fledged player – not in the future, but in the present. Businesses are automating routine processes, countries are launching national AI development programs, and technology giants are racing to create the most powerful model.

    What has changed?

    • The price of 1 million GPT-4 tokens fell from $36 to $0.25, making AI accessible even to startups.
    • Anthropic’s Claude 3.5 Sonnet model achieves over 76% accuracy on challenging tasks, outperforming even GPT-4o.
    • Alibaba’s Qwen2.5 from China is ranked number one in the world among open models on Hugging Face.

    And also…

    • The US, China, Australia, Belgium and Brazil have increased their AI funding by 2-5 times over the past year. For example, in Brazil, investment growth was 471%.
    • Companies like Walmart have already integrated generative AI into daily processes, from updating product catalogs to making personalized recommendations to customers.

    China Develops AI and Challenges the US

    While American companies compete for AI supremacy, China is rapidly closing the gap. And it’s doing so confidently: with government support, billions in investment, and an ambition to create an alternative to Western models.

    Now Chinese products are not just local solutions, but world-class players.

    Qwen and Yi models top the world rankings

    On the open leaderboard of the Hugging Face platform, Chinese models are already on par with GPT-4, Claude, and Mistral. The following stand out:

    • Qwen (by Alibaba) is a multilingual, open-source model that is easy to customize;
    • Yi (by 01.AI) is powerful, fast, and already shows great results in reasoning and coding tasks.

    They are no longer just used in China – they are becoming attractive to developers around the world who are looking for an open and competitive alternative to American solutions.

    AI is part of China’s national strategy

    Unlike the West, where AI development is a matter for private companies, in China it is a national issue. The state:

    • allocates resources to AI education and science,
    • actively supports AI startups,
    • oversees development activities related to safety and ethics.

    Trust in AI is the new currency of the future

    Artificial intelligence has learned to generate texts, write code, and even consult. But can we trust it completely? As AI penetrates medicine, education, and public administration, the demand for explanation of decisions and transparency of models is growing.

    The “black box” problem

    Most large models today are “black boxes”: they produce results but do not explain the logic behind them. This creates tension in sensitive areas – when it is not convenience that is at stake, but life or justice. Therefore, AI must not only be intelligent, but also understandable.

    Who is already working on this?

    Key market players are actively seeking solutions:

    • Anthropic in the Claude 3.5 model emphasizes the interpretability of answers by adding step-by-step reasoning.
    • OpenAI is exploring how models form their conclusions – in partnership with the academic community.
    • Google DeepMind is implementing self-checking mechanisms that highlight the logic behind a model’s response.

    These efforts are not just technical progress, but an attempt to turn trust in AI into a systemic value.

    The Transition to Sovereign AI

    More and more countries are moving from consuming global AI services to creating their own ecosystems. Sovereign AI is not just a matter of digital independence, but a strategic step for security, economy, and development. States want to control data, influence algorithms, and develop local talent.

    Who is already building their own AI systems?

    A number of countries are not waiting for global solutions – they are creating their own:

    • Belgium is launching a national open AI platform for government agencies.
    • Brazil is investing in its own LLM models for use in education and healthcare.
    • South Korea is developing neural networks in the Korean language with an emphasis on local queries.
    • Italy has opened a state-run AI development center based in a university community.

    These are examples of a new approach: AI as a state-level infrastructure, not just a commercial tool.

    Nvidia is the engine of local ecosystems

    Nvidia has become not only a graphics card manufacturer, but also a key player in the creation of national AI hubs. It provides cloud solutions, servers, and complete tech stacks for local AI deployment. With these tools, countries can build AI independently — and faster.

    Small Open Source Models Are the Weapon of Startups

    Not all companies can afford to use GPT-4 or Claude in full. But the good news is that open-source small language models (SLMs) are becoming a powerful alternative — accessible, flexible, and significantly cheaper. They are opening the door to AI for startups and small businesses.

    Meta, Mistral, Microsoft — Drivers of Openness

    Several players have made open AI a reality:

    • Meta, with the LLaMA 3 model, has created one of the highest quality open-source platforms for local execution.
    • Mistral AI entered the market with compact models that are not inferior to larger ones in many tasks.
    • Microsoft supports the development of open models in its Azure infrastructure, making it easier for teams without their own servers to get started.

    This is a new round of competition: not only the smartest one wins, but also the most accessible one.

    AI Becomes Economically Achievable

    The price of entry into the world of AI is falling rapidly. For example, GPT-4o mini costs only $0.25 per million tokens, which opens up opportunities for prototyping, customization, and scaling even on a minimal budget. Startups can launch their own AI assistants, internal chatbots, and content generators – without overpaying for a “big license”. AI has become not only smart, but also financially accessible.
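
    To make the economics concrete, here is a small back-of-the-envelope sketch of what per-million-token pricing implies for a prototype. Only the $0.25-per-million-token price comes from the text above; the traffic figures are hypothetical assumptions for illustration.

    ```python
    # Back-of-the-envelope cost estimate for an AI assistant prototype.
    # Only the per-million-token price comes from the article; the traffic
    # numbers below are hypothetical assumptions.
    PRICE_PER_MILLION_TOKENS = 0.25  # USD, as quoted for GPT-4o mini above

    requests_per_day = 10_000        # assumed chatbot traffic
    tokens_per_request = 1_500       # assumed prompt + response size

    monthly_tokens = requests_per_day * tokens_per_request * 30
    monthly_cost = monthly_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

    print(f"Tokens per month: {monthly_tokens:,}")   # 450,000,000
    print(f"Estimated cost:   ${monthly_cost:.2f}")  # $112.50
    ```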

    Spatial Computing – A New Era

    When the digital and physical worlds begin to merge, we talk about spatial computing. These technologies allow you to work with information as if it were “alive” in the space next to you. Already today, these solutions are moving beyond experiments and into everyday life.

    Leaders in spatial technologies

    A group of innovative companies has already formed on the market:

    • Apple Vision Pro sets a new bar for visual experiences in virtual and augmented reality.
    • Meta Quest 3 is actively promoted in the gaming, educational and collaborative spheres.
    • Google Project Starline allows you to communicate via “3D video conferencing” – as if the person were right next to you.

    Where does this already work?

    Spatial computing is changing not only the interface, but also the way of thinking. It is no longer futurism – spatial technologies are used in real tasks:

    • In medicine , for example at Boston Children’s Hospital, to simulate complex surgeries.
    • In architecture and the automotive industry , in particular GM, to visualize projects in real scale.
    • In education , to create inclusive and deep learning experiences.

    Immersive learning is the new standard for companies

    Lecture- and presentation-based training is gradually becoming a thing of the past. It is being replaced by simulation training, augmented reality, and 3D scenarios that engage employees in the process more deeply, quickly, and interestingly. This is not just a new form of delivery — it is a change in the corporate development paradigm.

    Technologies that transform learning

    Global companies are already implementing immersive tools:

    • Microsoft HoloLens enables training in AR environments: from engineering to medicine.
    • NVIDIA Omniverse models manufacturing and team processes in 3D space.
    • Companies are using VR headsets, simulators and interactive environments to provide training in realistic settings.

    Not only are these formats more effective, they are also more memorable and create a positive impression of the process itself. Immersive technologies are changing not only the approach to learning, but also the way we interact with brands. Read more about this in the article “10 Key Trends That Will Change Digital Marketing in 2025”.

    More than 50% of market leaders are already in the game

    According to analytics, more than half of Fortune 100 companies already use immersive learning in their internal programs. This includes staff training, crisis drills, security training, technical skills, and soft skills. Companies are investing in experiences, not just information — and it’s working.

    Medical Revolution Thanks to AI

    Artificial intelligence is already changing medicine not in theory, but in practice. Algorithms do not simply analyze symptoms — they help identify diseases at early stages, when the chances of recovery are much higher. This applies to the most complex diagnoses: depression, oncology, Alzheimer’s disease.

    Where does this already work?

    These programs do not replace a doctor, but they help make decisions faster, more accurately and more effectively. A number of companies have already brought AI solutions to the market:

    • Lucem Health (USA) – uses AI to analyze medical data and flag potential risks.
    • Sensely (Switzerland) – virtual medical assistants that interact with patients 24/7.
    • Ubie (Japan and India) – AI platforms for primary diagnostics that are already used in hospitals.

    What’s next?

    AI is gradually being integrated into every stage of medical care, from screening to treatment. Its ability to work with large amounts of data gives doctors more confidence and support in difficult situations. And perhaps in the near future we will not be talking about “medicine of the future,” but about a new norm — with AI nearby.

    Retail and personalization – 1:1 in real time

    Classic mailings and banners no longer work as they used to. Modern buyers want to receive offers that exactly match their interests, desires and the moment. That is why retail is massively moving towards AI personalization in real time.

    How does it work for market leaders?

    Major players have introduced AI assistants into the purchasing process:

    • Walmart uses chatbots and visual assistants that suggest products based on previous purchases.
    • Target creates personalized selections right in the app, responding to requests in real time.
    • Amazon adapts the main page, recommendation blocks, and even prices to each user individually.

    Personalization boosts efficiency

    According to research, personalized recommendations are three times more effective than mass campaigns. Shoppers ignore such offers less often, click more often, and buy faster. For brands, this is not a trend, but a new standard in customer communication.

    Generative AI in e-commerce

    Text descriptions, recommendations, visuals, chat responses — none of this is written manually anymore. In 2025, generative AI is completely changing e-commerce, automating processes that used to take hours. This not only saves time, but also enables scalable personalization for thousands of products at a time.

    How does this work in real companies?

    E-commerce leaders have already integrated AI into their catalogs, allowing them to respond to trends in real time — without having to hire a full team of marketers.

    • Walmart has updated over 850 million product descriptions using Large Language Models (LLM).
    • Generating name variants, selecting tags, writing SEO texts – all this is done by a ChatGPT-like assistant (a minimal sketch of such a call follows this list).
    • Other companies create visuals, banners, and even recommendation carousels using generative models.
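
    As a rough illustration of the kind of automation described above, here is a minimal sketch of generating a product description and SEO tags from structured catalog data with an LLM. The model name, prompt wording, and product fields are assumptions for illustration, not details of any retailer’s actual pipeline.

    ```python
    # Minimal sketch: turning a structured catalog entry into a product
    # description and SEO tags with an LLM. The model name, prompt, and
    # product fields are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes an API key is set in the environment

    product = {
        "name": "Thermo Mug 450 ml",
        "material": "stainless steel",
        "features": ["leak-proof lid", "keeps drinks warm for 6 hours"],
    }

    prompt = (
        "Write a two-sentence product description and five SEO tags "
        f"for this catalog entry: {product}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    ```

    The same template can be run in a loop over thousands of catalog entries, which is what makes this kind of generation scale.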

    AI as a buyer’s assistant

    Modern online stores do not just sell — they consult, advise, even entertain. AI assistants answer questions, help choose a size, combine items into an outfit, or pick a gift. And all this — in real time, in a dialogue, without human intervention.

    A New Leap Forward in Pharmaceuticals: RNA Therapies

    Just a few years ago, RNA technologies seemed like a distant future. Today, RNAi, mRNA, and ASO are already being actively used to treat diseases that were considered incurable. This is a real breakthrough that is changing the approach to medicine at the cellular level.

    How do RNA therapies work?

    Unlike classic drugs, RNA technologies work at the level of genetic information. They can block “incorrect” proteins, trigger necessary processes, or correct genetic errors. This opens up new ways to treat cancer and rare genetic syndromes.

    Who is already implementing it?

    RNA therapies are becoming the basis of a new generation of pharmaceuticals – more precise, effective and personalized. Leading companies are no longer just testing these technologies – they are already producing real drugs:

    • Alnylam (USA) develops RNAi drugs for the treatment of nervous and cardiovascular diseases.
    • Vico Therapeutics (Netherlands) is working on ASO solutions for genetic disorders.
    • Chinese biotech companies are actively developing next-generation mRNA vaccines.

    Space has become closer

    Until recently, space was exclusively the domain of states and scientific agencies. But in 2025, the situation has changed dramatically – private companies are actively exploring orbit, launching satellites, and testing new formats of logistics and communications. Technologies that seemed like science fiction yesterday are becoming business.

    Who dictates the pace?

    Today, the leaders in space launches are not governments, but corporations:

    • SpaceX carries out more than 70% of all US launches.
    • Starlink provides satellite internet to dozens of countries, including remote regions.
    • New startups are creating solutions for cargo transportation, construction in orbit, and deployment of mini-stations.

    The private sector has not only caught up with NASA – it is now setting the pace and defining the commercial logic of space exploration.

    New Horizons in Infrastructure

    A separate industry is orbital data centers, which operate without the need for cooling and with minimal data transfer delays. There are also developments underway for orbital refueling systems and space manufacturing, which will allow components to be created without returning to Earth. Space is becoming not a goal, but a medium for innovation.

    The New Data Center Boom

    AI applications require not only powerful models, but also stable infrastructure. In 2025, the world is experiencing a real boom in the construction of data centers, which are becoming the backbone of the digital economy. Every request to a language model, every visualization or text generation is gigabytes of data and energy that are processed in real time.

    Why is infrastructure the new priority?

    The emergence of generative AI, spatial computing, and immersive learning has increased the load on servers several times over. This forces companies to invest in new types of data centers — more powerful, more stable, and more environmentally friendly. AI market leaders, cloud platforms, and governments are especially active in this area.

    What is the new wave built on?

    To ensure stable operation of data centers, companies are looking for alternative energy sources:

    • Nuclear energy as a long-term stable solution;
    • Geothermal energy as a green option with a low carbon footprint;
    • Thermonuclear projects as an investment in the energy of the future.

    Liquid cooling and new energy efficiency technologies

    The infrastructure for AI is growing at an incredible rate, and with it, the need for server cooling. Traditional ventilation systems can no longer cope with the load, so companies are switching to liquid cooling. This solution is not only more effective but also more energy-efficient, and it is becoming the standard of the future.

    What is liquid cooling?

    Instead of simply blowing air over the servers, the system uses special liquids or water loops that effectively reduce the temperature of the equipment.

    This cuts energy consumption, reduces the risk of overheating, and increases server density, while the technology operates quietly and smoothly – ideal for high-load data centers.

    A trend that is becoming the norm

    Analysts predict that by 2026, more than 38% of data centers in the world will switch to liquid cooling. This is due not only to energy savings, but also to the environmental requirements of the market: companies are increasingly reporting their carbon footprint. Green data centers are becoming not just an image element, but a criterion for choosing a partner.

    Local AI hubs in unexpected countries

    Artificial intelligence is no longer confined to Silicon Valley. In 2025, local AI hubs will appear in countries that were not previously associated with high technology. This is a new stage of AI globalization – when each country gets a chance to create its own intellectual ecosystem.

    Where is growth already happening?

    The most actively developing are:

    • India is creating national AI centers at universities and launching startups with global ambitions.
    • Norway is investing in green data centers and AI solutions for sustainable development.
    • Brazil and Italy are forming hubs for localized AI products and support for small businesses.

    In conclusion

    Artificial intelligence is no longer a futuristic dream, but a reality that changes business, medicine, education, and even the idea of everyday things. We live in a time when technology is developing faster than we can adapt. But this is precisely where the greatest opportunity lies: the first to understand the principles of the new era will gain a real advantage. And knowledge about AI today is not a theory, but a real tool for the future.

    If you want to confidently move towards a new profession, we invite you to our upcoming events. There you will be able to learn more about modern trends, communicate with mentors and choose a course that suits you. Start changing your professional life now – the future is being created today.

  • Technology Trends That Will Gain Importance in 2025 and Beyond

    The rapid transformation of technology brings with it new opportunities for the future of businesses and individuals. So, how prepared are you for the technologies that will gain importance in the coming period?

    The technological innovations that will become prominent in 2025 and beyond have the potential to transform nearly every aspect of our lives. In this article, using data compiled from global reports and expert analysis, we’ve examined the technologies that are critical for businesses and employees to gain a competitive advantage and, consequently, prepare for the future.

    1. The Transformative Power of Artificial Intelligence

    Artificial intelligence (AI) will be at the center of the technology world in the coming years. Autonomous and semi-autonomous systems, particularly together with generative AI, will be used to increase efficiency and reduce costs in businesses. Research predicts that by 2028, at least 15% of daily business decisions will be made autonomously by AI.

    Prominent Areas of Use in 2025:

    Cyber Security: Real-time threat detection and attack prevention.

    Education: Strengthening educational activities with smart learning systems. 

    Software: Assisting and accelerating software development through autonomous systems. 

    Customer Experience: Improving customer experience with automated and personalized customer support solutions. 

    Health: Accelerating disease mapping and drug development. 

    Risks and Challenges Related to Artificial Intelligence:

    AI is not only an opportunity but also a risk that must be carefully managed. The ethical and legal management of AI is likely to be a top priority for organizations in 2025. Disinformation, ethical concerns, and security vulnerabilities highlight the need to keep AI technologies under control. Research suggests that disinformation security will be actively addressed by 50% of businesses by 2028.

    2. Quantum Computing: The Key to the Future

    Quantum computing has the potential to revolutionize many industries by pushing the boundaries of traditional computing. Data shows that at least one-third of businesses will begin investing in quantum computing by the end of 2025. The introduction of Turkey’s first quantum computer demonstrates that local developments in this field are accelerating in parallel with global advancements.

    Major Sectors to Invest in Quantum Computing by the End of 2025:

    – Media, Information, Telecom and Technology

    – Government/Public Sector 

    – Financial Services 

    – Education

    Risks and Challenges Associated with Quantum Computing:

    Quantum computing will create a major paradigm shift in data security and accelerate the development of new cryptographic standards. Quantum computing is powerful enough to threaten existing security infrastructures. According to experts, most traditional asymmetric cryptography* methods will become insecure by 2029. Companies should prepare for these threats with post-quantum cryptography (PQC) solutions and begin making the necessary investments.

    *Cryptography: The technique of hiding or encoding data to ensure that only the person who needs to see the information and has the key to break the code can read it.

    3. Robotics and Autonomous Systems

    Robotics will continue to expand its influence in both the physical and digital worlds. Robotic technologies are poised to revolutionize both the manufacturing and service sectors. Multifunctional robots will optimize business processes while providing significant cost advantages.

    By 2025, 37% of technology leaders are considering introducing humanoid robots into operations, 35% expect them to be implemented, and 18% expect them to be fully integrated into operations.

    Areas of Use of Robotic and Autonomous Systems:

    Warehousing and Logistics: Picking, packing and transporting goods.

    Health: Patient care, cleaning, and handling hazardous situations.

    Cyber Security: Autonomous threat detection, response and prevention systems.

    Human-Machine Synergy:

    Robotic and autonomous systems work together with humans to enable more efficient and flexible operations. Experts predict that these systems will become an integral part of daily business operations by 2030, when 80% of people are expected to interact with intelligent robots on a daily basis; today, this figure is less than 10%.

    4. Digital Humans and the Metaverse

    Digital humans and augmented reality (AR)-based systems offer new opportunities to transform customer experience and business processes. These technologies will have a growing impact, particularly in the retail and education sectors.

    Potential Benefits of Digital Human and Metaverse Technologies:

    E-commerce: Personalized shopping experiences.

    Training: Hands-on learning simulations.

    Business: Efficient meeting platforms for remote working.

    Risks and Challenges of Digital Humans:

    Alongside the opportunities afforded by digital humans, deepfake technologies and digital disinformation present significant threats. They are being used globally by cybercriminals in fraudulent activities and in disinformation campaigns aimed at political influence, particularly around elections. As companies and government agencies consider when and how digital humans will play a role in their operations, they must also promptly incorporate new measures into their planning to address the cyber threats created by these same technologies.

    5. Energy Efficiency and Sustainable Technologies

    As digital transformation continues, energy efficiency is becoming increasingly important, and sustainability is a key priority for businesses. Energy-efficient IT systems both reduce costs and minimize environmental impact. This plays a critical role in reducing the carbon footprint of IT operations.

    Featured Areas of Use

    Sustainable Product Development: Using energy-efficient computing to design products that consume less energy.

    IoT Sensors: Energy efficiency through real-time environmental monitoring.

    Green Data Centers: Operational and environmental efficiency with lower energy consumption and cost advantages.

    6. Hybrid Computing: Combining Power and Flexibility

    Hybrid computing stands out as a system that combines different computing technologies to address the complex computing needs faced by modern businesses and individuals. Technologies such as CPU (Central Processing Unit), GPU (Graphics Processing Unit), ASIC (Application-Specific Integrated Circuit), neuromorphic, quantum computing, and photonic systems are combined under the umbrella of hybrid computing, providing both security and flexibility.

    According to research, hybrid computer systems will play a critical role in many sectors in 2025 and beyond with their capacity to solve complex computational problems.

    Areas of Use of Hybrid Computers

    Business and Enterprise Solutions: Hybrid systems provide businesses with a competitive advantage by accelerating big data analysis. They also increase data security with cloud-based backup solutions.

    Healthcare: Offers high speed and precision in genetic research and treatment modeling. Enables secure storage and analysis of patient data.

    Education: Hybrid systems integrated with quantum computers are used in scientific computations.

    Finance and Banking: Hybrid computing is used to model financial risks and markets. Fast and accurate analysis increases the effectiveness of credit processes.

    7. Spatial Computing: The Interaction of the Digital and Physical Worlds

    Spatial computing is a technology trend that takes on a new dimension by integrating digital content into the physical world. Combined with advancements like augmented reality (AR), mixed reality (MR), and artificial intelligence (AI), it allows users to interact with the real world and the digital world.

    Areas of Use of Spatial Computing:

    Business: Remote teams can gather in 3D virtual meeting rooms to achieve more interactive and effective work. Companies can create digital copies of physical assets to track performance, predict maintenance needs, and run simulations.

    Education: With mixed reality-enabled simulations, students and staff can experience real-life scenarios risk-free. 

    Retail and e-commerce: Customers can test products they want to buy in their own environments using spatial information technologies. AR-based guidance systems in stores can provide customers with product information and simplify the shopping process.

    Healthcare Services: Spatial information systems that monitor the condition of patients in real time without the need for wearable devices enable healthcare personnel to intervene more quickly.

    Gaming and Entertainment: Spatial computing transforms physical environments into a digital playground, providing the user with a unique experience.

    8. Invisible Intelligence: The Unnoticed Power of Technology

    Ambient invisible intelligence is a technology solution in which small, low-cost sensors and tags embedded in everyday objects form an ecosystem that operates discreetly. These systems monitor and analyze environmental conditions, providing users with more efficient, personalized, and sustainable living spaces.

    Areas of Use of Invisible Intelligence

    Retail and e-commerce: Sensors in stores can analyze customer movements to provide personalized recommendations. Automatic reorder systems can be used to keep shelves from running out of stock.

    Smart Buildings and Cities: IoT sensors can analyze energy consumption in buildings in real time, optimizing lighting and heating systems. Traffic, waste management, and air quality can be monitored to create more livable cities.

    Industrial Applications: Sensors can increase the efficiency of production lines by continuously monitoring the operating status and performance of machines. They can also ensure quality by continuously monitoring product temperature, humidity, and location during transport.

    Office Management: Office occupancy rates can be monitored to ensure efficient use of workspaces. Lighting, temperature, and air quality can be automatically optimized for employee comfort.

    9. Disinformation Security: A Critical Line of Defense in the Digital Age

    Disinformation is rapidly spreading as one of the greatest threats of the digital age. Misleading information not only impacts individuals but can also target businesses, governments, and societies, causing widespread harm. Research shows that disinformation security has become a priority for businesses to rebuild trust in information in the digital world.

    Disinformation Security in the Future

    Technological advancements: With artificial intelligence and machine learning, disinformation detection systems will become faster and more effective. Blockchain technology will also be used to verify the source of information.

    Systems Becoming Mandatory for Businesses: According to research company Gartner, by 2028, 50% of businesses will be using specialized products and services for disinformation security.

    New Threats and Solutions: The proliferation of deepfake technology will necessitate the development of more sophisticated detection and verification tools.

    10. Neurological Development: The Technology That Redefines Humanity

    Neuroenhancement is a branch of science comprised of technologies that read, analyze, and, if necessary, write back brain activity to enhance human cognitive capacity. These technologies have the potential to radically transform learning, decision-making, and productivity processes by enhancing individuals’ capabilities through brain-machine interfaces (BMI). Research in this area predicts that neuroenhancement will revolutionize healthcare, education, and business, in particular.

    Advantages of Neurological Development

    A Revolution in Healthcare: It can be used to treat brain injury, stroke, and other neurological diseases. Simulations based on brain activity can help surgeons specialize more quickly and safely. It can also be used to create personalized treatment programs to restore motor skills.

    Personalization in Education: Personalized educational materials can be presented based on students’ brain activity. Neurological data-backed teaching methods can be developed to enhance the retention of learned information.

    Increased Performance in Business: Technologies that increase the brain’s focus can lead to more efficient work processes. Neurological analyses can increase team harmony and employee productivity.

    Personal Development and Well-Being: It can help manage issues such as stress, anxiety, and depression, and can offer programs that improve individuals’ memory and problem-solving skills.

    11. 5G Private Networks and Industrial Automation

    5G private networks offer businesses a dedicated communications infrastructure that supports high bandwidth, low latency, and high device density. This infrastructure will play a critical role, particularly in the success of industrial automation. 

    Main areas of use:

    – Ultra-low latency communication enabling synchronous operation of industrial robots.

    – Digitalization and real-time monitoring of production lines.

    – Effective operation of autonomous transport systems and storage robots.

    – Optimization of inventory management with IoT sensors.

    Smart Cities and Public Services:

    – Real-time management of traffic lights, waste management and energy systems.

    – 5G-based video surveillance and high-speed communication solutions in public safety.

    One of the key uses of 5G private networks is industrial automation, enabling production processes to be more efficient, faster, and error-free. Automation solutions powered by 5G technology will accelerate digital transformation, and the leading technologies in this area include:

    • Digital Twins: 

    5G enables real-time updates to digital copies of production facilities and processes, allowing for quick identification and resolution of issues.

    • IoT and Sensor-Based Manufacturing:

    Optimizing production processes with data collected from IoT devices and sensors via the 5G network.

    • Autonomous Robots and Vehicles:

    Synchronized operation of robotic systems and autonomous vehicles in industrial areas with high-speed communication.

    12. Macro Security, Cyber Anomaly Detection, and Deep Packet Inspection

    In an era of rapidly evolving cyber threats, it is critical for organizations to strengthen their security infrastructure with proactive and innovative approaches. Macrosecurity, cyber anomaly detection, and deep packet inspection technologies enable organizations to combat these threats and ensure data integrity. Macrosecurity provides a comprehensive security approach across networks, devices, and applications. Organizations need to assess cyber threats comprehensively and provide solutions that encompass the entire infrastructure.

    Cyber Anomaly Detection

    Cyber anomaly detection plays a critical role in preventing cyberattacks by identifying abnormal activity in network traffic and user behavior. The following strategies can be implemented in this context (a minimal sketch follows the list):

    • Behavioral Analysis:

    Development of algorithms that analyze user behavior and detect deviations from normal.

    Identifying insider threats and unknown threats.

    • Detection with Machine Learning:

    Improving anomaly detection accuracy with machine learning-based models.

    Rapid adaptation to new types of threats with continuously learning systems.

    • Real-Time Monitoring:

    Real-time monitoring of all network activities and immediate notification of threats.
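
    As a rough illustration of machine-learning-based anomaly detection on network traffic, here is a minimal sketch using an Isolation Forest over a few per-session features. The synthetic feature set is an assumption for illustration; a production system would rely on far richer telemetry and continuous retraining.

    ```python
    # Minimal sketch: flagging anomalous network sessions with an Isolation
    # Forest. The synthetic features (bytes sent, session duration, failed
    # logins) are illustrative assumptions, not a real feature set.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Baseline traffic: typical bytes sent, session duration (s), failed logins.
    normal_sessions = np.column_stack([
        rng.normal(50_000, 10_000, 1_000),   # bytes sent
        rng.normal(120, 30, 1_000),          # session duration
        rng.poisson(0.2, 1_000),             # failed logins
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

    # A few suspicious sessions: huge transfer, very long session, many failed logins.
    suspicious = np.array([
        [5_000_000, 3_600, 0],   # possible data exfiltration
        [40_000, 100, 25],       # brute-force login attempts
    ])

    print(model.predict(suspicious))  # -1 marks an anomaly, 1 marks normal
    ```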

    In addition to these measures, security must also be enforced at the packet level for corporate data flows. In this context, the following points are of particular importance:

    – Analysis of data packets in network traffic at the content level.

    – Detection of malware, malicious content and data breaches.

    Application Layer Monitoring:

    – Monitoring application layer data and providing control over sensitive information with DPI technology.

    – Encrypted Traffic Analysis: Development of technologies that can securely analyze SSL/TLS encrypted traffic.

     

    Turn Technology into Opportunity

    The technological advancements and digital transformation of 2025 and beyond present both significant opportunities and significant challenges for businesses. At Karel, we stand by you to optimize your business processes and gain competitive advantage by leveraging the opportunities offered by technology. By taking the right steps in your technology-focused strategies with Karel solutions, you can build the future today.

  • Science Through the Lens: 11 Photos Taken by Scientists

    We have collected winning shots from international photography awards that show how photography helps scientists study complex natural phenomena.

    Nikon Small World

    Nikon Small World is one of the most prestigious photomicrography competitions, established by Nikon in 1975. What makes the award unique is that the jury evaluates the works from both a scientific and an aesthetic point of view, creating a bridge between science and art.

    Mouse brain tumor cells

    Differentiated mouse brain tumor cells (Photo: Bruno Cisterne with assistance from Eric Vitriol/Nikon Small World)
    The image shows mouse brain tumor cells, with the actin cytoskeleton (thin filaments that support the cell’s shape and facilitate its movement), microtubules (structures along which various substances move), and the nuclei that store the cell’s genetic material all clearly visible. The author of the image is Dr. Bruno Cisterne, in collaboration with Dr. Eric Vitriol of Augusta University, USA.

    Scientists are studying how abnormalities in the cytoskeleton can lead to diseases such as Alzheimer’s and amyotrophic lateral sclerosis. “One of the key challenges in studying neurodegenerative diseases is that their causes are not fully understood,” explains Cisterne. “In order to develop effective treatments, we first need to understand the underlying mechanisms of these diseases. Our study plays an important role in finding this knowledge and opens up prospects for the development of new drugs.”

    The image was taken at 100x magnification and was the winner of the Nikon Small World 2024 competition.

    The optic nerve of a rodent

    The winner of the 2023 Nikon Small World competition is an image of a rodent’s optic nerve taken by Hassanain Kambari and Jaden Dixon of the Lions Eye Institute in Australia. The image, taken at 63x magnification, shows a section of the rodent’s optic nerve head, the point where the optic nerve exits the eyeball.

    Special fluorescent dyes are used to visualize collagen fibers (green), nerve cells (red) and the cell nucleus (blue). This image plays a role in the study and treatment of diabetic retinopathy, a retinal damage that occurs with diabetes.

    “The visual system is a complex and highly specialized organ, and even relatively minor disturbances in retinal blood flow can lead to catastrophic vision loss. I decided to enter the competition to demonstrate the complexity of retinal microcirculation,” Kambari shares.

    Gecko embryo foot

    The photo shows the foot of a giant Madagascar day gecko embryo; it is only 3 mm long. The authors of the photo are scientists from the University of Geneva: Dr. Grigory Timin performed the work under the supervision of Dr. Michel Milinkovic. The image is artificially colored: the nerves are highlighted in blue, while the bones, tendons, ligaments, skin, and blood cells are shown in warmer colors.

    The image was captured using confocal microscopy, which produces sharp, high-contrast images by using point illumination and filtering out scattered light. In total, several hundred detailed images were taken and then combined into a single picture. The entire process took about two days and required almost 200 GB of data.

    The photo was the winner of the Nikon Small World 2022 competition.

    Wellcome Photography Prize

    The Wellcome Photography Prize is a photography award organised by Wellcome, a British foundation dedicated to health research that was established in 1936 with money left as a legacy by pharmaceutical magnate Henry Wellcome.

    The competition includes several nominations, one of which is “Wonders of Scientific and Medical Visualization”.

    Cholesterol in the liver

    The 2025 winner in the Wonders of Scientific and Medical Imaging category is a photograph of cholesterol in the liver. The image shows cholesterol crystals (blue) inside a liver cell (purple). When cholesterol changes from a liquid to a crystalline state, it can build up in blood vessels and damage them, leading to heart attacks and strokes.

    Scientific photographer Steve Gschmeissner created the image using an electron microscope, which allows one to see tiny structures with very high resolution.

    Triatomine bugs

    Pictured is a triatomine bug, common in Latin America. The bite of this insect is dangerous to humans – it is a carrier of Chagas disease.

    Chagas disease can lead to serious heart and digestive problems, especially if left untreated. It most often affects low-income people in rural areas of Latin America.

    The photograph was taken using a cryoscanning electron microscope, and thanks to artificial coloring, individual cellular structures can be clearly seen.

    Photo by Ingrid Augusto, Kildare Rocha de Miranda and Vania da Silva Vieira, researchers from Brazil who study the triatomine bug and hope their work will lead to a better understanding of how to combat Chagas disease.

    Air Pollution

    Polluted air on Brixton Road in London (Photo: Marina Vitaglione/Wellcome Photography Prize 2025)
    The photograph shows fine particulate matter, the most common air pollutant. Artist Marina Vitaglione, together with scientists from Imperial College London, collected air samples from different areas of the city. Vitaglione photographed these samples under a microscope and created prints using a hand-made analogue printing process with added iron salts, which gives the photographs their blue tint.

    The photograph shows specimens collected from Brixton Road in south London.

    “Pollution levels in central London have fallen over the past four years, but a quarter of roads still exceed legal limits for nitrogen dioxide (mostly from diesel engines) and millions of Londoners continue to breathe polluted air. This work aims to visualise the ‘invisible killer’,” says Vitaglione.

    Royal Society Publishing Photography Competition

    The Royal Society is a British scientific organization founded in 1660. Its mission is to advance science and disseminate scientific knowledge. One of the Royal Society’s modern initiatives is the annual scientific photography competition.

    Shark attack

    ‘Hunting from Above’: A school of fish surrounded by sharks (Photo: Angela Albie. Drone pilot August Paula/Royal Society Publishing Photography Competition)
    The shark photo is called “Hunting from Above.” It shows a large school of small fish confronting four young blacktip reef sharks. The image was taken by a drone off the coast of the Maldives.

    This photo is part of a scientific study by biologist Angela Albi from the Max Planck Institute in Germany. Albi studies the interactions between blacktip reef sharks and schools of fish in the Maldives. “In this image, the shark on the left suddenly goes from swimming calmly in a school to starting to hunt. It stands out because of its posture,” Albi says. Biologists study photos and videos to understand how sharks hunt and how other fish react to them.

    The photograph won first place in the Behaviour category and overall the 2024 Royal Society Publishing Photography Competition.

    Constellation Cassiopeia

    The Heart and Soul are two nebulae located in the constellation Cassiopeia, approximately 6,000 to 7,000 light years from Earth. The author of the photograph is Imran Sultan, a research fellow at Northwestern University in the United States. He spent almost 14 hours photographing these nebulae to capture their details and features. The photo won in the Astronomy category.

    “My astrophotography has given me many opportunities to make astronomy more accessible to a wider audience. The fact that astronomers observed these two nebulae and saw a ‘heart’ and a ‘soul’ in them highlights the human element in astronomy,” Imran says.

    Common toads during the mating season

    The photograph shows common toads during the breeding season, gathered in large numbers in shallow water.

    “This photo was taken in the spring, when I was collecting eggs for experiments with my research team,” says the author of the photo, Ovidiu Dragan, a PhD student at Ovidius University in Constanta. “The whole area was literally overflowing with toads desperately trying to mate. What is especially interesting is that the second toad from the top is a male green toad, not a common toad. He was trying to mate with another species with which they coexist in their natural habitat. This behavior in the mountains was a big surprise for us.”

    The photo, taken with a phone, won second place in the Behaviour category.

    #ScientistAtWork

    #ScientistAtWork is an annual international photo contest organized by Nature magazine. Its goal is to showcase scientists at work, whether in a lab, a park, the taiga, fjords, or even Antarctica. The organizers encourage researchers to share photos of their workdays and promise cash prizes to the winners.

    Biologists in northern Norway

    The winner of Nature’s Scientist at Work photo contest in 2025 is a photograph of biologist Audun Rikardsen at work in a fjord in northern Norway. A team of scientists from the University of Tromsø monitors the migration of herring, which attracts killer whales and humpback whales. To track the whales’ movements, the researchers tag them with satellite transmitters deployed with an air gun.

    The tags collect data on the whales’ locations and record dive parameters such as duration and depth. Scientists also often perform biopsies, taking tissue samples to monitor the whales’ health.

    The work keeps researchers in close proximity to the animals. “You can actually feel their [the whales’] breathing,” says biologist Emma Vogel, who took the photo. “And you hear them before you see them, which is always amazing.”

    Defenders of Frogs

    This photo is another ScientistAtWork winner. It shows ecologist Kate Belleville of the California Department of Fish and Wildlife holding baby frogs.

    In the Lassen National Forest in northern California, a team of scientists and volunteers capture young frogs to bathe them in an antifungal solution. The solution kills the chytrid fungus that is causing mass die-offs of amphibians around the world. After the treatment, the frogs are released back into the wild.

    The froglets are also implanted with elastomer tags with a unique combination of colors. These tags form a code that glows under ultraviolet light.

    As the researchers note, it is extremely difficult to notice the small frogs – if you do not know what you are looking for, they can easily be mistaken for scurrying crickets. Therefore, the work requires special caution and attention.

  • Space as a Business: Does Mining Metals in Orbit Make Economic Sense?

    Space as a Business: Does Mining Metals in Orbit Make Economic Sense?

    We tell you whether flights to asteroids to extract palladium, iridium, niobium and other metals in outer space can pay off and how this market works

    Asteroid mining has been described many times in science fiction, from Soviet writer Alexander Belyaev’s novel “The Star KETs” to the TV series “The Expanse”. Such a step has long seemed natural and logical, and by 2025 science and business have reached a level where the fiction can become reality.

    Together with experts, we figure out whether mass mining of minerals on asteroids is really possible and what volumes this market can reach.

    Are asteroids really a treasure trove of resources?

    Humanity is accustomed to the fact that resources lie deep beneath the Earth’s surface, but few think about how exactly they got there. Over billions of years, fragments of asteroids have fallen to the planet as meteorites. The asteroids themselves – celestial bodies orbiting the Sun – consist of silicate minerals and carbon compounds, and it is thanks to this composition that vast mineral deposits have accumulated on Earth.

    Importantly, volumes that are huge by the planet’s standards represent only small fractions of individual asteroids. More than a million such bodies have been discovered to date, ranging in size from tens of meters to hundreds of kilometers. All this makes space, and asteroids in particular, a virtually inexhaustible source of minerals.

    As explained to RBK Trends at the Russian State Geological Prospecting University named after Sergo Ordzhonikidze (MGRI), asteroids are divided into three main types, and each contains valuable resources. Thus, there are metallic asteroids containing iron, nickel, cobalt, gold, rhodium, and platinum group metals, namely iridium, ruthenium, and platinum. A striking example of such an asteroid is the famous Psyche, one of the largest asteroids discovered by mankind.

    In addition, there are carbonaceous asteroids – sources of water and rare earth elements. The best known of them are Bennu and Ryugu; the Japanese spacecraft Hayabusa2 landed on Ryugu several years ago. The third type is stony asteroids, which contain iron, nickel, cobalt and tellurium; an example is the asteroid Itokawa.

    What are the actual stocks and needs?

    The finiteness of resources on the planet is obvious: the number of people is growing along with the volume of resource consumption, which means that sooner or later they will run out. However, is the situation with minerals on Earth so critical? Experts have different opinions on this matter.

    The expert also cited the transport sector as an example. According to him, building 12-15 million hydrogen fuel-cell vehicles would require increasing platinum production by half. At the same time, a single near-Earth asteroid, (6178) 1986 DA, holds reserves of platinum group metals three times greater than those in the Earth’s crust.

    “We still perceive space as a kind of frontier, where devices fly to do something important for the Earth and sometimes make forays beyond the Earth’s orbit. However, if we look 20 years into the future, we will see a huge, vibrant industry with active missions beyond orbit to the nearest planets, and perhaps even further,” explained Evgeny Kuznetsov.

    Stepan Ustinov, head of the basic department of methods for studying ore deposits at MGRI and candidate of geological and mineralogical sciences, shared his own forecasts for the depletion of the Earth’s resources with Ts Trends. According to him, data from the Russian Federal Agency for Subsoil Use and the US Geological Survey indicate that explored reserves of rare metals, at current consumption, will last humanity 50-100 years. At the same time, easily accessible deposits are gradually being depleted, which forces a transition to the development of poorer ores. This, in turn, increases the cost of extraction.

    MGRI believes that humanity will have no choice but to extract minerals from asteroids if three factors combine. The first is a sharp increase in demand, for example as a result of a mass transition to thermonuclear energy, which requires niobium-based superconductors. The second is the accelerated depletion of easily accessible deposits. The third is a breakthrough in space technology, thanks to which the delivery of resources from asteroids will become 10-100 times cheaper.

    At the same time, rare metals on Earth will not run out in the next 20–30 years, Ustinov is confident: “Asteroid mining, purely hypothetically, under the most pessimistic forecasts, could become a forced measure only after 2050.”

    Mining in Space: Pros and Cons

    Experts see both advantages and significant disadvantages in developing asteroid deposits. The advantages often include almost unlimited reserves. In particular, metallic asteroids contain platinum group metal concentrations 10-100 times higher than terrestrial ores. And the Psyche asteroid, according to preliminary estimates, contains metal reserves worth $100,000 quadrillion.

    The second advantage is access to rare metals such as osmium and rhodium, which are extremely scarce on Earth. In Russia, they are recovered at Norilsk Nickel plants as a by-product of platinum processing; there are no other sources of osmium and rhodium in the country.

    The advantages also include the environmental component. “There are no environmental restrictions for the extraction of minerals on asteroids, since there is no biosphere on space bodies, so any extraction methods can be used: explosive, thermochemical,” explained Stepan Ustinov from MGRI.

    Researchers believe that the main disadvantage and limitation is the high cost of mining in space.

    “The cost of sending 1 kg of cargo into space is $2,000-10,000. For comparison, mining 1 kg of platinum in Russia costs an average of $20,000-30,000, while it can be sold for $30,000-40,000. Asteroid mining will require billions of dollars in upfront investment,” the Ordzhonikidze Russian State Geological Prospecting University calculated.

    Scientists call the lack of autonomous mining technologies an as-yet insurmountable obstacle to asteroid development. “On Earth, mining, crushing, and enrichment are proven processes. In space, there is no gravity for classical flotation, no atmosphere for smelting, and no fully robotic mines,” Stepan Ustinov explained.

    How much money is needed for orbital mining

    The key issue in the development of the space mining market remains return on investment. Futurologists are confident that once the field matures, minerals from space will become a gold mine for investors. At the initial stages, however, it will require huge investments with a long payback period.

    Evgeny Kuznetsov cited data according to which extracting “water as fuel” on asteroids could deliver an internal rate of return (IRR) of 12-18%. “Profits will grow along with demand for ‘space fuel’ and the scale of production. By 2050, this market segment could grow to $0.1-1 trillion, and controlling it will become key to the further expansion of humanity into space,” Kuznetsov told Ts Trends.

    Vladimir Komlev, Doctor of Technical Sciences, Professor of the Russian Academy of Sciences and Director of the Baikov Institute of Metallurgy and Materials Science, holds a more conservative view, in particular because today it is practically impossible to estimate how much more expensive or cheaper asteroid mining would be compared to developing deposits on Earth.

    “It is not possible to conduct a comparative analysis of the costs of extracting solid minerals on Earth and in space. Extraction includes many factors, including scientific, technical, technological and logistical ones. It is more likely that extracting minerals on asteroids is an extremely expensive pleasure and is not feasible at the present time,” the scientist believes.

    Is it possible to set up mining in space?

    Experts do not have a clear opinion on the prospects for asteroid mining: both positive and extremely negative forecasts are common. At the same time, it is impossible to deny the fact that space companies are striving to develop space resources. For these purposes, they organize expensive missions and conduct experiments on Earth.

    According to the futurologist, humanity should be thinking about developing deposits in space already now: “When they were discussing possible oil and other deposits in the Arctic about 100 years ago, they also thought about why this was necessary, because there is warm Baku and convenient Persia. However, over time it turned out that these resources are not enough for humanity,” Kuznetsov recalled. “As at the start of Columbus’s journey, it is difficult to calculate specific revenue, but in the end we know how much humanity has advanced thanks to ocean voyages. Space investors now have roughly the same thing in their heads.”

    Oleg Mansurov also holds a positive view. Although there is not a single example of a paid-off investment in asteroid mining to date, the CEO of SR Space calls the market a “blue ocean” in which pioneers will be able to make super profits.

    “This may happen as early as the 2030s, but for this to happen, we now need to invest in reusable rockets and nuclear reactors for use in space. We have a reserve in these areas in our country, and becoming a leader in supplying the world market with resources mined in space is a very prestigious and significant geopolitical role,” Mansurov is confident.

    Stepan Ustinov calls the extraction of minerals on asteroids technically possible, but not yet economically feasible: “If this happens, it will be a long time coming, and the first to be extracted will be water and platinum group metals — they are needed for space missions. Unfortunately, Russia is lagging behind in this race: there are no private startups or state programs for asteroid mining. There is no doubt that the future belongs to those who are the first to make space mining profitable. For now, the leaders are the United States and China, but if breakthrough technologies appear, everything could change dramatically. The main conditions for this will be a reduction in launch costs, automation of mining, and demand for raw materials in space for the construction of stationary extraterrestrial bases.”

    Mining and metallurgical companies on Earth can also change the conditions of the space race. And not only by investing in space development, but also by their current expertise. If Stepan Ustinov’s forecast comes true and platinum group metals really start to be mined on asteroids, then Norilsk Nickel’s technologies and competencies can have a critically important impact on Russia’s development as one of the leaders in asteroid mining, since the company is the world’s largest producer of palladium.

    In 2024, Norilsk Nickel produced 2.8 million ounces of palladium, 3% more than in 2023, as well as 667 thousand ounces of platinum (+0.5% year on year). Notably, actual volumes exceeded the company’s forecast for 2024.

    Vladimir Komlev also counts on the success of mining on Earth: “It can be assumed that with a high degree of probability, the extraction of minerals will be implemented on asteroids in the distant future. Why not? However, at present, this issue can be classified as science fiction, which is due to the large number of scientific and technical problems that require solutions, including logistical ones.”

  • Global Microelectronics Market in 2025: Current Status and Trends

    Global Microelectronics Market in 2025: Current Status and Trends

    We study the state of the modern global microelectronics market, key drivers and development prospects, as well as the impact of the main trends in the industry using the example of the development plans of the largest microelectronics manufacturers

    Microelectronics is one of the fundamental industries for the technological development of the economy. The most significant areas of use of microelectronic components are telecommunications, computing, transport, industry, and consumer devices.

    The wide applicability of the industry’s products drives its long-term growth. From 2020 to 2024, the microelectronics market grew at an average annual rate of 9%. While expanding steadily over the long term, it is subject to periodic downturns: in 2023, microelectronics consumption declined amid weaker demand in a number of industries, primarily consumer devices.

    Dynamics and structure of the global microelectronics market, billion US dollars (Photo: SIA, WSTS, SEMI, analysis by Strategy Partners)
    In 2024, the market recovered and reached $627 billion, with the key growth driver being increased consumption of graphics cards and processors for artificial intelligence (AI) used to build data centers. The main sectors consuming microelectronics products are computing equipment and telecommunications, which together account for 58% of the market.

    Demand-side trends and forecast of microelectronics consumption volume

    In the long term, demand for the industry’s products will grow as digitalization advances and new technologies are adopted. Our analytical special project for the upcoming Microelectronics forum found that the main factor stimulating demand for microelectronic components is the active development of a number of key technologies:

    AI;
    cloud computing: a high proportion of computing is moving to the cloud, with the number of servers and data centers growing;
    development of telecommunications: growth in the number of devices and network speeds, modernization of communications infrastructure;
    new technologies in transport: growth of the electric and hybrid vehicle segment, introduction of driverless systems and advanced driver assistance systems (ADAS);
    edge computing: the growth of small data centers located at the edge of the network;
    growing demand for power electronics due to rising energy consumption: general digitalization, increasing numbers of electric vehicles and data centers;
    cybersecurity: complex security methods require more microelectronics for encryption;
    Industry 4.0: automation of production through digital equipment and the Industrial Internet of Things (IIoT).
    As these areas develop, demand for microelectronics products is predicted to grow at about 8% per year, reaching $1 trillion by 2030. The largest share of the global market’s growth will come from two industries, computing equipment and telecommunications equipment, whose combined share of consumption will rise from 58% in 2024 to 65% in 2030.

    Supply-side trends

    The microelectronics market is global in nature, with distinct regional specialization in individual production processes. Specialization at individual stages of chip creation has emerged and deepened because of the capital intensity and complexity of production. The market is highly consolidated: 56% of its volume is controlled by ten leading companies from the United States, the Republic of Korea, and Germany. In 2024, the top three by revenue from microelectronics sales were Samsung Electronics, Intel, and Nvidia. Nvidia’s capitalization reached $4 trillion in July 2025, overtaking Apple and Microsoft to become the most valuable company in the world.

    The bulk of supply comes from a pool of six countries/regions: the United States, the European Union, South Korea, Japan, Taiwan, and China. The United States is a leader in both chip development and production. In these countries/regions, new microelectronics technologies are developed and introduced under the auspices of state-backed centers with large-scale government funding.

    Structure of the global microelectronics market by production processes, % of revenue of companies in the segment, % of added value (Photo: BCG, SIA)
    The key trend in the development of the microelectronics production base is sovereignty. Given the high geographical concentration of production, geopolitical tension (expressed in tightening trade restrictions and escalating armed conflicts), and the possibility of pandemics or natural disasters, there are significant risks of supply-chain failure. This dictates the need to develop full-cycle production within individual countries/regions. The effect of this trend is visible in all leading manufacturing countries: they are implementing development programs and introducing market-protection mechanisms.

    Currently, the leaders in production capacity are China, Taiwan and South Korea, which together account for more than 60% of global capacity. The largest construction of new fabs over the 2024-2032 horizon is planned in Taiwan, the USA and South Korea, which should increase production capacity in these countries over that period by 97%, 203% and 129%, respectively.

    All development programs implemented by leading countries in the industry cover priority areas of technological development: the transition to smaller topologies and new packaging methods, the use of new materials, the use of photonic components, research in the field of quantum computing, and the creation of new transistor architectures.

    In addition to the leading countries, development programs are being implemented in a number of other countries. The largest investments among them are being made in India. A number of projects are being implemented here to create production facilities in partnership with world industry leaders: Micron, AMD, etc. Against the backdrop of the growing confrontation between the US and China, India may become the “second China” in the field of microelectronics production.

    Microelectronics production equipment

    The market for microelectronics equipment and technologies remains highly concentrated, with the key players being the US, the EU and Japan, which account for 96% of global equipment production. Concentration is highest in the lithography and photoresist segments, where individual countries account for approximately 90% of production (the leader in lithography equipment is the Netherlands, and in photoresists, Japan).

    In the segments with a relatively small share of the value chain – EDA and IP cores – the three players mentioned hold almost 100% of production, with US companies strongly predominant.

    The trend towards sovereignty that characterizes the microelectronics industry also extends to the equipment and technology market. The most striking example is China. As of 2022, mainland China accounted for 20% of global equipment spending and 18% of global equipment imports. Export restrictions imposed by the United States, Japan and the Netherlands have created a need for domestically produced alternatives. To ensure technological independence, China invested $25 billion in microelectronics equipment in the first half of 2024, with the full-year total estimated at about $50 billion. However, despite an ambitious development program in 2012-2024, China still lags in the microelectronics technology race.

    Cases of leading companies

    Key trends in the industry’s development directly affect the plans of manufacturing companies. At the same time, the largest suppliers of microelectronics not only respond to emerging trends, but also set the direction of the industry’s development.

    Samsung, one of the leading manufacturers of microelectronics, focuses on development in the following areas:

    increasing production of DRAM memory intended for AI accelerators;

    expansion of the product line for the automotive industry (the company is implementing a roadmap for the period up to 2027 for the release of eMRAM for use in vehicles);
    Together with another South Korean microelectronics maker, SK Hynix, the company plans to spend more than $470 billion to create a chip manufacturing cluster to achieve technological sovereignty.
    The development priorities of Intel, another of the top three microelectronics manufacturers, are also shaped by key industry trends:

    focusing on AI in the development of new processors (most of the company’s developments presented at the Consumer Electronics Show (CES) 2025 are aimed at working with advanced language models and increasing the speed of AI-related operations);
    Intel is aggressively building factories in the US under a strategy known as IDM 2.0, which helps protect against supply chain disruptions.

    Nvidia, also one of the top three microelectronics manufacturers, presented its plans through 2028 at the Computex 2025 and CES 2025 exhibitions. The key development areas for the coming years will be:

    Building its own data centers. Nvidia is moving from supplying equipment to data center operators to designing and building new computing facilities, implemented through partnerships with TSMC, Foxconn, Gigabyte, Asus, and others.
    Industry 4.0: robotics and digital twins. One example is the digital twin of a new car manufacturing plant that Nvidia created in partnership with BMW; it allows production processes to be modeled, tested and optimized before the plant itself is launched, cutting the time needed to set up real production and improving its efficiency.
    Autonomous transport and ADAS. Nvidia is developing a number of projects in partnership with industry leaders such as Toyota and General Motors.

  • What are embeddings and how do they help AI understand the world better?

    What are embeddings and how do they help AI understand the world better?

    Together with an expert, we figure out how embeddings work, why they have become the basis of intelligent systems, how they are interconnected with cloud technologies, and what significance they have for the infrastructure of the future

    Artificial intelligence can write texts, recognize faces, recommend products, and even predict industrial failures — all thanks to its ability to understand abstract data. At the heart of this ability are embeddings, one of the key tools of machine learning. They allow complex and heterogeneous objects — words, images, products, users — to be translated into digital language that a machine can understand. Without them, AI would be just a set of formulas.

    Why AI Needs ‘Digital Translation’

    Human language is polysemous and contextual. When we write “bank,” the context tells us whether we mean a riverbank or a financial institution. For a machine, this is a challenge: words, images, events, search queries — all of this must be translated into a numerical format in order to compare, analyze, and train models.

    Embeddings are a way to do this translation. A word, image, or other object is represented by a vector — a numerical representation in a multidimensional space, trained on statistical relationships or large language models. These vectors allow the system to determine similarities between objects, build dependencies, and draw conclusions. For example, the embedding of the word “cat” will be closer to “animal” than to “car.”
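    As a rough illustration of this idea, here is a minimal sketch with made-up three-dimensional vectors (the output of a real model has hundreds or thousands of dimensions): the distance between two embeddings is usually measured with cosine similarity.

        import numpy as np

        def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
            # Cosine similarity: close to 1.0 means "points in the same direction".
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Toy 3-dimensional "embeddings", invented purely for illustration.
        cat = np.array([0.9, 0.8, 0.1])
        animal = np.array([0.8, 0.9, 0.2])
        car = np.array([0.1, 0.2, 0.9])

        print(cosine_similarity(cat, animal))  # high: "cat" is close to "animal"
        print(cosine_similarity(cat, car))     # low:  "cat" is far from "car"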

    An example of a 3D projection of embeddings. Although real LLM embeddings have far too many dimensions to be viewed directly, the principle remains the same – words that are close in meaning are grouped together in space (Photo: GitHub)
    A 2024 study showed that embeddings obtained with GPT‑3.5 Turbo and BERT significantly improve the quality of text clustering. In tasks such as grouping news items or reviews by topic, they raised cluster purity metrics and improved processing accuracy.
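    The clustering step described in such studies can be sketched roughly as follows. The vectors here are random stand-ins; in a real pipeline each row would be produced by an embedding model such as BERT or an embeddings API.

        import numpy as np
        from sklearn.cluster import KMeans

        # Stand-in embeddings: six documents, eight dimensions each,
        # deliberately generated around two different centres to imitate two topics.
        rng = np.random.default_rng(42)
        topic_a = rng.normal(loc=0.0, scale=0.1, size=(3, 8))
        topic_b = rng.normal(loc=1.0, scale=0.1, size=(3, 8))
        doc_embeddings = np.vstack([topic_a, topic_b])

        # Group the documents into 2 clusters by vector proximity.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_embeddings)
        print(labels)  # e.g. [0 0 0 1 1 1] — documents with similar embeddings share a cluster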

    How Embeddings Help AI Understand the World

    Embeddings enable neural networks to find connections between objects that are difficult to specify manually. For example, an online store’s recommendation system can determine that users interested in “hiking backpacks” often buy “hiking water filters” — even if these products are not directly related in the catalog. Embeddings capture statistical dependencies, user behavior, contexts, and even stylistic features of the text. This is the key to creating personalized services and scalable intelligent systems.

    The main task of embeddings is to transform complex data (text, images, behavior) into a set of numbers, in other words, a vector that is convenient for algorithms to work with. Vectors help AI find similarities, understand meaning, and draw conclusions. Moreover, many things can be represented as embeddings: individual words, entire phrases and sentences, images, sounds, and even user behavior.

    Texts and Language (NLP)

    It is important for AI not just to “read” the text, but to understand what is behind it. Embeddings allow models to capture hidden connections between words, to determine that, for example, “cat” is closer to “animal” than to “car”. More complex models can create embeddings not only for words, but also for entire sentences – this helps to more accurately analyze the meaning of phrases, which is important, for example, for chatbots or automatic translation systems.

    Images and visual content

    In computer vision, embeddings allow you to turn an image into a set of features — color, shape, texture, etc. This helps algorithms find similar images, recognize objects, or classify scenes: for example, distinguishing a beach from an office.

    Recommender systems and personalization

    Modern digital platforms create embeddings not only for content (movies, products), but also for the users themselves. This means that each user’s preferences are also represented as a vector. If your vector is close to another person’s vector, the system can offer you similar content. This approach makes recommendations much more accurate.
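    A minimal sketch of this idea, with invented two-dimensional preference vectors (real platforms learn much higher-dimensional vectors from behaviour data), might look like this:

        import numpy as np

        # Invented "taste" vectors for users and items; in practice these are learned.
        users = {
            "alice": np.array([0.9, 0.1]),   # prefers documentaries
            "bob": np.array([0.2, 0.8]),     # prefers action films
        }
        items = {
            "nature_documentary": np.array([0.95, 0.05]),
            "space_documentary": np.array([0.85, 0.15]),
            "car_chase_movie": np.array([0.10, 0.90]),
        }

        def recommend(user_vec, catalogue, k=2):
            # Rank items by cosine similarity to the user's preference vector.
            scores = {
                name: float(np.dot(user_vec, vec) / (np.linalg.norm(user_vec) * np.linalg.norm(vec)))
                for name, vec in catalogue.items()
            }
            return sorted(scores, key=scores.get, reverse=True)[:k]

        print(recommend(users["alice"], items))  # the two documentaries come first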

    How Embeddings Are Created: From Simple to Complex

    Embeddings can be thought of as a multidimensional space where each point is an object (word, image, user). The proximity between points in this space reflects the learned similarity. For example, in Word2Vec (an algorithm that turns words into vectors that reflect their meaning and similarity in meaning), the vectors of the words “king” and “queen” will be close, and their difference will be close to the difference between “man” and “woman”. However, in more modern models (e.g., BERT), vectors depend on the context, and such linear dependencies are weaker.
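    The famous “king − man + woman ≈ queen” relationship can be checked with a few lines of vector arithmetic. The vectors below are invented for illustration rather than taken from a trained Word2Vec model, but the operation is the same.

        import numpy as np

        # Invented 4-dimensional vectors that only mimic the structure a trained model learns.
        vectors = {
            "king": np.array([0.9, 0.8, 0.1, 0.7]),
            "queen": np.array([0.9, 0.8, 0.9, 0.7]),
            "man": np.array([0.1, 0.2, 0.1, 0.6]),
            "woman": np.array([0.1, 0.2, 0.9, 0.6]),
        }

        target = vectors["king"] - vectors["man"] + vectors["woman"]

        def cosine(a, b):
            return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

        # Find the word whose vector is closest to king - man + woman.
        # (Real systems usually exclude the query words themselves from the candidates.)
        best = max(vectors, key=lambda w: cosine(vectors[w], target))
        print(best)  # "queen"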

    There are different ways in which AI translates text, images, or sound into vectors—aka embeddings.

    Classic text models (e.g. Word2Vec or GloVe) create one vector per word. The difficulty is that they do not take context into account. For example, the word “bat” gets a single vector whether it refers to the animal or to sports equipment – the model does not see the difference.
    Modern transformer-based models (BERT, GPT, and others) work differently: they analyze the surrounding context in which the word occurs and create a vector that corresponds to that particular meaning. This is how AI understands which “bat” we are talking about – the one in the cave or the one on the baseball field.
    For images, embeddings are built differently. Neural networks trained on huge arrays of images “extract” visual features from them: colors, shapes, textures. Each object in the image is also represented by a vector.
    Multimodal embeddings combine data from multiple sources at once — text, images, audio, video — and present them in one common vector space. This allows AI to find connections between different types of data. For example, to recognize that the caption “kitten playing with a ball” refers to a specific moment in a video or a fragment in a photo.
    Embeddings are at the heart of recommendation systems, voice assistants, computer vision, search systems, and many other applications. They allow us to find connections between objects, even if these connections are not explicitly stated.
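    As a hedged sketch of what contextual sentence embeddings look like in practice — assuming the open-source sentence-transformers library and the all-MiniLM-L6-v2 model are available, neither of which is mentioned above — comparing the meaning of whole phrases could look like this:

        # pip install sentence-transformers
        from sentence_transformers import SentenceTransformer, util

        # An example open-source model; any sentence-embedding model works similarly.
        model = SentenceTransformer("all-MiniLM-L6-v2")

        sentences = [
            "The cat is sleeping on the sofa",
            "A kitten naps on the couch",
            "The car needs an oil change",
        ]
        embeddings = model.encode(sentences)  # one vector per sentence

        # Sentences 0 and 1 mean almost the same thing, so their vectors are close.
        print(util.cos_sim(embeddings[0], embeddings[1]))  # high similarity
        print(util.cos_sim(embeddings[0], embeddings[2]))  # low similarity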

    More and more often, attention is paid to adapting embeddings to specific tasks. For example, a model can not just “understand what the text is about,” but form a representation specifically for the desired purpose – be it legal analysis, customer support, or medical expertise. Such approaches are called instruction-tuned and domain-specific.

    Where Embeddings Live: Cloud Servers for AI

    Training and using embeddings is a resource-intensive process. Especially when billions of parameters and multimodal data are involved. Such tasks require:

    a large amount of computing power with GPU resources (specialized graphics processors designed for resource-intensive tasks);
    storage of vector databases;
    fast indexing and searching for nearby vectors;
    low latency in response generation, such as in chatbots and search.
    Therefore, the development of embeddings is closely linked to the growing demand for cloud computing and infrastructure optimized for AI workloads. To work with embeddings, businesses need not just virtual machines, but specialized servers with GPU support, high-speed storage, and flexible scalability.
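    The “search for nearby vectors” mentioned above reduces to a nearest-neighbour query. A brute-force version fits in a few lines; this is only a sketch — production systems rely on approximate-nearest-neighbour indexes and dedicated vector databases for speed.

        import numpy as np

        def top_k_similar(query: np.ndarray, index: np.ndarray, k: int = 3) -> np.ndarray:
            # Normalise rows so that a dot product equals cosine similarity.
            index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
            query_norm = query / np.linalg.norm(query)
            scores = index_norm @ query_norm          # one similarity score per stored vector
            return np.argsort(scores)[::-1][:k]       # indices of the k closest vectors

        # Stand-in "vector database": 10,000 stored embeddings of dimension 384.
        rng = np.random.default_rng(0)
        stored = rng.normal(size=(10_000, 384))
        query = rng.normal(size=384)

        print(top_k_similar(query, stored))  # ids of the three most similar items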

    Such cloud solutions allow you to train and retrain your own models, launch services based on LLM, and integrate AI algorithms into websites, applications, and analytical systems. Cloud servers remove the barrier to entry into AI — businesses do not need to invest in their own cluster, they just need to choose the appropriate configuration for their model or service.

    Today, embeddings are the basis of search, recommendations, content generation and automation. In the coming years, they will become even more complex, individualized and contextual: AI will increasingly recognize the user’s meaning and goals and the context of the interaction, and offer relevant answers. According to analysts, the global artificial intelligence market will grow by about 35% per year and reach roughly $1.8 trillion by 2030.

    But without a reliable infrastructure — fast, scalable, with support for vector databases and GPUs — such systems will be either slow or unavailable. That’s why the development of embeddings and cloud infrastructure go hand in hand: the former provides intelligence, the latter — power and flexibility.

  • Tech Trends for 2025: AI, Crypto, and Tech Synergies

    Tech Trends for 2025: AI, Crypto, and Tech Synergies

    Tech trends for the coming year cover a wide range of areas, such as artificial intelligence, quantum computing, spatial technologies and hardware development. Deloitte analysts have compiled a report highlighting the key trends that will shape technology in the coming year. Komputerra has studied the report and reviewed the key trends in this article.


    Artificial intelligence is everywhere

    AI is becoming an integral part of life, like electricity or the Internet. By 2025, it is expected to be used in almost every area, from optimizing urban traffic to personalizing healthcare. 

    Generative models such as large language models (LLM) will continue to evolve, but the focus will shift to using more compact models capable of solving highly specialized problems. Such approaches will provide greater efficiency and resource savings. According to experts, the future belongs to “agent AI”, capable of not only analyzing data, but also performing specific tasks, such as automating business processes, managing supply chains, or creating personalized recommendations for users.

    As AI becomes more popular, the need to integrate it into various devices and applications also increases. For example, intelligent assistants will be able to perform complex actions based on the analysis of historical data, which will open up new horizons for automation. Experts note that the development of AI raises issues of ethics and safety that require active attention. Moreover, the integration of AI into everyday life will be accompanied by a change in the labor market: many professions may transform or disappear altogether, giving way to new roles related to the management and development of AI technologies.

    Spatial technologies and their integration with AI

    Spatial computing is gaining attention due to its ability to merge the physical and digital worlds. It enables new ways to interact with data through 3D models, digital twins, and augmented reality. For example, in the medical field, this will allow doctors to visualize patients’ anatomy in real time, and in construction, to model objects before they are physically built.

    The combination of spatial technologies and AI creates the basis for new forms of interaction. Future systems will be able not only to display data, but also to suggest optimal actions based on situational analysis. For example, in logistics, such systems will be able to model delivery routes in real time, taking into account weather conditions, traffic jams, and other factors. 

    The development of these technologies can also revolutionize education by providing interactive and adaptive learning platforms. Spatial technologies are also becoming particularly relevant in manufacturing processes, where digital twins allow operations to be simulated and optimized before they are actually carried out, saving resources and time.

    Hardware comes to the fore

    After years of software dominance, hardware is once again becoming a key element of technological innovation. Growing AI demands require specialized computing chips. This is leading to a growing market for AI devices and personal computers with built-in AI chips. Such devices provide autonomous computing, which reduces the load on cloud services and improves data security.

    However, this trend comes with challenges in the energy sector: data center energy consumption is expected to increase significantly. To address this, companies are investing heavily in energy-saving technologies such as renewable energy and optimized cooling systems. In addition, a new class of devices is emerging that can perform computations right at the “edge” of the network, minimizing latency and reducing dependence on centralized servers.

    In the future, hardware development will go beyond specialized chips. Devices will begin to integrate AI into things like smart home control systems, autonomous cars, and wearables. This will lead to fully autonomous solutions that can adapt to changing environmental conditions.


    Quantum Computing and New Approaches to Cryptography

    With the advent of quantum computers in the next 5-20 years, current encryption methods may become vulnerable. That’s why companies are already starting to implement post-quantum cryptographic standards. This process takes time, but it cannot be delayed to ensure the security of data and communications in the future.

    Quantum computing also opens up new possibilities for data analysis and modeling of complex systems. For example, it could be used to develop new drugs or optimize global logistics networks. 

    Experts note that companies that start investing in these technologies now will be able to gain a significant competitive advantage. In addition, quantum technologies will be actively used to solve problems that require high-precision calculations, such as financial modeling or predicting climate change.

    Digital transformation of core systems

    Key enterprise systems are being transformed by AI. Integrating AI into core systems enables automation of routine operations and the creation of more intelligent processes. However, this complicates the architecture of such systems, requiring deep technical expertise and a strategic approach to modernization.

    In addition, new technologies help to improve the transparency and usability of such systems. For example, the integration of AI allows for more accurate demand forecasting and resource management. However, it is important to consider that automation should not completely replace human control, especially in critical areas such as healthcare or finance. In parallel, enterprises will be forced to strengthen their cybersecurity efforts to protect updated systems from new threats.

    The Power of Technology Intersections

    One of the new strategies for innovative development is the combination of different technologies and industries. For example, combining spatial technologies with AI can accelerate the development of solutions for logistics or healthcare. Such targeted intersection opens up new market opportunities and strengthens the competitive advantages of companies.

    A striking example is the interaction of AI with robotics. This allows the creation of autonomous systems capable of performing complex tasks, such as collecting data in conditions dangerous to humans. Such technologies can also be used in agricultural technology, helping to optimize crop cultivation processes. Moreover, the integration of technologies in smart city ecosystems will optimize resource management and improve the quality of life of citizens.

    Conclusions

    In 2025, technology will largely evolve around artificial intelligence, creating smarter, more sustainable, and more intuitive solutions. In addition, spatial computing, new hardware, and preparation for the era of quantum computing are shaping a future in which technology will become even more integrated into our daily lives.

    Companies that take action today will not only adapt to new conditions, but also become leaders. Investments in advanced technologies, training specialists and implementing flexible strategies will allow them to use the potential of innovation to the maximum. Successful market players are already paving the way to a sustainable and efficient future, creating products and services that will form the basis of tomorrow’s world.

  • AI, Agents, and Hybrid Platforms: How Technologies Are Rebooting the Cloud

    AI, Agents, and Hybrid Platforms: How Technologies Are Rebooting the Cloud

    How cloud technologies are changing under the influence of AI, how AI assistants differ from agents, and what security challenges arise when neural networks are introduced

    How Clouds Are Changing Under the Influence of AI

    — How is the approach to cloud management changing with the development of AI?

    — The approach is changing dramatically, not only to managing cloud infrastructure, but also in general to how people interact with digital systems, interfaces, and applications.

    The classic search engine is fading into the background – it is being replaced by neural network tools such as GigaChat and ChatGPT, where the user wants to immediately receive a relevant, human, easy-to-understand answer: not just find a link, but get a specific personalized result.

    We looked at this trend and created our own solution based on generative AI — the AI assistant “Claudia”. This is not just a chatbot — it is a system that helps automate a large number of tasks related to cloud infrastructure.

    For example, a developer can already give Claudia the task of deploying a virtual machine, setting up a connection, deploying an infrastructure monitoring dashboard, and configuring alerting. There are plans to add the ability to create a Kubernetes cluster or a database through a dialogue with Claudia. The solution also helps those who are not involved in IT at all — an ordinary marketer or small business owner can quickly build a website or launch a landing page. All that is needed is to describe the task in text, as in a regular messenger; the AI assistant will then perform the necessary actions itself rather than merely offering recommendations.

    — In recent years, there has been increasing talk about AI assistants and AI agents. What is the fundamental difference between these entities and how does it manifest itself in cloud platforms?

    — Here’s a simple analogy: a chatbot is a vending machine. It can only perform the actions that are pre-programmed into it and follows only scripted scenarios. If you want to buy a drink, the machine will give you only what has been loaded into it.

    An AI assistant is already more of a “big brother”. It will not do everything for you, but it will tell you how best to act, what steps to take, what to try. Such an assistant can work with a large array of information and help with decision-making, but the action is up to you.

    But an AI agent is an autonomous system. It takes on a task and executes it, and can do so without human intervention. The agent initiates actions, accesses the necessary sources, and launches processes – for example, it can not only tell you how to book a hotel but do it itself. It can process a request, enter data into the CRM, select a tour and make a reservation – all in the background. In the cloud, an agent will soon be able to deploy basic infrastructure on its own, for example, launch a virtual machine or set up monitoring. Here, the AI assistant becomes an access window to a complex digital system – the user sees a communication interface, while under the hood a full-fledged multi-agent architecture is at work.

    — What tasks are AI assistants and agents already helping to solve in real business scenarios today?

    — Technologies already cover a whole range of areas — from user support to optimization of complex industrial processes and supply chains. I would name four key scenarios.

    The first is user support. The introduction of agents and assistants has a tangible effect. For example, now every tenth question to Cloud.ru customer support is processed by AI. The plan is to increase the share to 70% without reducing the quality.

    The second scenario is marketing and sales. AI helps marketers cope with tasks related to creativity, the “blank page” problem, and content creation. In our company, the AI assistant “Claudia” promptly generates preliminary calculations for commercial offers directly in chats with customers.

    The third scenario is development. We actively use AI assistants in development teams. In addition, there are solutions like GigaCode that help others implement generative AI in development. This is also a strong trend: tools like GitHub Copilot are no longer exotic, but part of everyday work.

    If we look at it more broadly, then multimodal AI is coming to the forefront globally. An example is a recent Google presentation: a user asks how to fix a bicycle, and the assistant builds a sequence of actions using an image and text. This is an agent system and a multimodal neural network in action. We are also moving in this direction in Russia. More and more companies are testing such scenarios.

    — To what extent does the implementation of AI agents in cloud management today correspond to global practices?

    — We at Cloud.ru have developed our own cloud environment for creating Cloud.ru Evolution AI Factory agents. Our colleagues from other Russian companies are also working in this direction, creating LLM platforms and AI services.

    Most tools are still built on open source technologies. This provides certain advantages for wider application of AI. Firstly, Russian developers take an active part in the development of international open source projects. We make a serious contribution to the development of the global AI ecosystem. And secondly, even relying on open solutions, you can create unique products. Therefore, I believe that our market has made significant progress in the development of AI solutions over the past year and a half.

    Implementation of AI in the company: barriers and solutions

    — Despite widespread implementation, not all companies have switched to working with AI in practice. What prevents companies from effectively implementing artificial intelligence in work processes today?

    — The term “artificial intelligence” often covers a very broad range of technologies. I will focus on AI agents and generative AI.

    The first barrier, which is often forgotten, is the need to find and formulate a business case for using AI. It must be specific and expressed in numbers, with clear value for the company and clear metrics for executives.

    Barrier number two is security. This is the most sensitive topic. Various assessment tools and standards are already emerging, but risks remain. Especially if a company uses open APIs, sends sensitive data, and does not have clear regulations on how to work with AI.

    The third barrier is technical. The architecture of multi-agent systems, the complexity of integration, the lack of resources. Not every company has a mature team of developers ready to quickly integrate AI into processes. This is especially true for those who already have their own systems built and need to implement new technologies into them.

    The fourth barrier is the lack of data. Even the most advanced models are useless without training material. Data needs to be collected, labeled, verified, stored, and this is a huge job.

    Finally, the fifth barrier is the issue of internal culture. According to a Gallup study, only 16% of employees use AI in their daily work, while among managers this figure is already 33%. The gap between these levels is colossal, and it is also associated with the lack of a culture of safe and conscious use of AI tools. Therefore, the transformation must go not only from the top, but also deep into the organization, through training, involvement, demystification and democratization of technology.

    — Today, there is a lot of talk about the democratization of artificial intelligence. How realistic is it to make work with AI and cloud technologies accessible to those who do not have specialized expertise?

    — More and more low-code and no-code platforms are appearing on the market, which allow developing agents without programming skills. This is no longer science fiction: you simply set the logic through a visual interface, select tools, set up a sequence — and the agent works.

    But complex scenarios still require developers. Especially in large businesses, where systems have been developed for years and contain unique elements. There, open source or a visual designer is not enough – deep integration is needed.

    We offer tools for both of these audiences. On the one hand, our platform is open to developers, who can customize everything for themselves. On the other hand, there are ready-made tools that even a less experienced user can handle. In general, we strive to make the entry threshold ever lower and the interface ever more intuitive.

    — What challenges do companies face when integrating AI into existing infrastructure?

    — The main challenge today is the lack of computing resources. And this concerns not only Russia, but also the global market. This problem is especially acute due to the growing popularity of language models: they require powerful GPUs, scalable architecture, and stable channels.

    We solve this problem both in terms of providing the capacities themselves and through tools that help use them efficiently. A simple example is our Cloud.ru Evolution ML Inference service. It allows you to use GPUs as flexibly as possible: for example, distribute the capacities of one video card into several parts and run several models on it at once. This reduces costs and increases efficiency.

    Another challenge is a paradigm shift. Many companies are used to infrastructure as hardware. They come to the cloud for servers, GPUs, storage units. But today the cloud is no longer about hardware, but about intelligent tools. We want both developers and regular users to perceive the cloud as a place where they can simply solve their problem using ready-made and understandable services, without delving into the infrastructure.

    And the third challenge is not directly related to technology or business approach. It is about creating a culture of using AI in the company. This requires providing opportunities to use tools, consciously creating challenges, and encouraging the use of AI to solve problems.

    Security and horizons of AI development

    — How is the approach to ensuring security changing against the backdrop of the active implementation of artificial intelligence? What risks do companies face most often?

    — Approaches to security are changing today, and companies are beginning to review regulations at all levels. This concerns internal regulations, work with employees, and technical architecture.

    To put it bluntly, security is not just about technology. It is about culture. Statistically, most leaks are not caused by hacks, but by mistakes made by employees themselves. Someone accidentally sent data to an open API, someone saved a sensitive file in the wrong cloud. That is why it is important to educate users — to tell them what they can do with AI, how to work with open interfaces correctly, and what information should never be transferred to third-party services.

    Next, technical measures. One of the priorities is to deploy models in a closed circuit, without going beyond the borders of Russia. Especially when it comes to working with personal data. We provide this opportunity: deploy the model locally, access it through a private channel, and be sure that no data will leak.

    We also use various methods of protection against prompt injection (a type of attack on large language models in which an attacker inserts malicious or unexpected instructions into the prompt text to change the model’s behavior. — Ts Trends), track the behavior of models, and monitor their operation. Without constant analytics and control, it is simply impossible to introduce such tools into a corporate infrastructure.

    — How has the user experience of interacting with AI tools changed over the past few years? What trends do you see?

    — Firstly, voice interfaces. There are already “smart” speakers, voice assistants like “Alice”, “Salut” — almost every home has such a device. But, oddly enough, voice has not yet become a full-fledged entry point into AI. People are not always ready to solve complex problems with their voice. But this is temporary, we are not writing off voice interfaces, they will develop.

    The second is maximum simplification of interfaces. The user should not fill out complex forms or understand the cloud structure. The ideal scenario: the agent does everything for you, and you simply confirm the action. Minimum clicks – maximum results. In our products, we strive for exactly this scenario, with maximum automation and ease of use of cloud services.

    An important source of insights for us is the young audience. Look at how children use Minecraft, Roblox: they intuitively understand the mechanics. Interaction with the digital environment is natural for them. This is where, I think, new approaches to UI will come from. We must create interfaces in which the user simply formulates the task, and the AI does the rest.

    In the future, we will need to answer the question: how far can we advance in the symbiotic interaction of humans and AI? On the one hand, technologies offer us the opportunity not only to relieve humans of routine, but also to expand the boundaries of their potential. On the other hand, automation carries risks: the structure of some algorithms is not always clear, questions arise about the balance between responsibility and trust in the machine. The outcome is still impossible to predict – much will depend on how we manage to build interaction taking into account ethics, flexible architecture, and the culture of use. I am convinced that our task is to move in this direction as consciously as possible, experimenting so that technologies truly become an organic and safe extension of humans, and not their competitor.

    — What do you think will be the next milestone in AI development and where is the market heading in general?

    — I agree with the idea recently voiced by the head of OpenAI, Sam Altman: a real revolution will happen when one person creates a $1 billion business with the help of AI. I think this is quite realistic and we are getting closer to it.

    When it comes to technology, I see several big trends.

    The first is automation of agent systems. More and more tasks will be performed autonomously, without human intervention. These are not just assistants, they are full-fledged digital employees. And companies are already starting to perceive them as such.

    The second is robotics. Not in the sense of humanoid robots, but industrial, production-grade solutions that will perform tasks in warehouses, factories and logistics. The CEO of Nvidia also talks about this: the falling cost of components and the growth of computing power make this area extremely promising. And here, demand for cloud services and infrastructure for training the AI models at their core will grow.

    Another important trend is hybrid platforms. For example, we are developing Evolution Stack — an environment where you can use your own infrastructure and connect to a public cloud if necessary. This is especially important in conditions where there is a shortage of computing resources, but you need to scale quickly. At our GoCloud conference in April, we announced the development of Cloud.ru Evolution Stack AI bundle — a platform for solving AI problems in a hybrid scenario. It simplifies the launch and scaling of AI services in business and lowers the entry threshold for users to develop AI-based solutions.

    Of course, we see a trend towards moving from large models to domain-specific Small Language Models (SLMs). This is a logical step: it optimizes the use of infrastructure, increases accuracy, and reduces costs. SLMs are one of those tools that really bring AI closer to business.

    And finally, data. The question of access to high-quality data remains open. Amazon and other big players invest in companies that do data labeling. Without this, no AI will work. Therefore data, its protection and local storage are also part of the AI landscape of the future.