This overview presents 12 strategic technology trends for the next five years and explains why they matter for society and business today. Above all, these trends represent fundamental technical capabilities that both companies and individual specialists need in order to compete in the digital world.
According to analyst forecasts and research, these technology trends will drive digital business and innovation through 2027.
AI engineering
AI engineering automates updates to data, models, and applications to streamline AI delivery. Combined with strong AI governance, AI engineering will operationalize the delivery of AI to ensure its ongoing business value.
As companies continue to innovate in AI, they also need to leverage all their resources—data, models, and compute.
Companies should consider ModelOps when implementing AI solutions. ModelOps shortens the time it takes to move AI models from pilot to production through a principled approach that helps ensure a high success rate. It also provides a system for managing and governing the lifecycle of all AI models (graph, linguistic, rule-based, and other decision models).
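The lifecycle management that ModelOps formalizes can be illustrated with a minimal sketch. The stages and transition rules below are illustrative assumptions, not any vendor's actual workflow:

```python
# Minimal sketch of a ModelOps-style lifecycle tracker (illustrative only;
# real platforms add versioning, approvals, audit trails, and metrics).
from dataclasses import dataclass, field

# Allowed lifecycle transitions: pilot -> validated -> production -> retired
TRANSITIONS = {
    "pilot": {"validated"},
    "validated": {"production", "pilot"},
    "production": {"retired"},
    "retired": set(),
}

@dataclass
class ManagedModel:
    name: str
    stage: str = "pilot"
    history: list = field(default_factory=list)

    def promote(self, new_stage: str) -> None:
        """Move the model to a new stage, enforcing the allowed path."""
        if new_stage not in TRANSITIONS[self.stage]:
            raise ValueError(f"cannot move {self.name} from {self.stage} to {new_stage}")
        self.history.append((self.stage, new_stage))
        self.stage = new_stage

model = ManagedModel("churn-predictor")   # hypothetical model name
model.promote("validated")
model.promote("production")
```

The point of such a state machine is that every model, regardless of type, passes through the same governed path from pilot to production.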
Data Fabric
Data fabric enables flexible, resilient integration of data sources across platforms and business users, making data available wherever it is needed, regardless of where it resides.
Data Fabric is the continuous analysis of existing, discovered, and prospective metadata assets to support the design, deployment, and consumption of integrated and reusable data across all environments, including hybrid and multi-cloud platforms.
Data Fabric can use analytics to learn and proactively recommend where data should be used and changed. This can reduce data management efforts by up to 70%.
A data fabric uses both human and machine capabilities to access data in place or to support its consolidation where needed. It continuously identifies and connects data from disparate applications to discover unique, business-critical relationships between available data points.
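A tiny slice of this automated discovery can be sketched as a value-overlap scan that proposes candidate join keys between datasets. The column names and threshold below are invented for illustration:

```python
# Illustrative sketch of metadata-driven discovery: scan two datasets'
# columns and flag candidate join keys by value overlap, a small piece
# of what a data fabric automates at scale.
def candidate_joins(table_a, table_b, threshold=0.5):
    """Return (col_a, col_b, overlap) for column pairs whose value sets overlap."""
    pairs = []
    for col_a, vals_a in table_a.items():
        for col_b, vals_b in table_b.items():
            a, b = set(vals_a), set(vals_b)
            overlap = len(a & b) / max(1, min(len(a), len(b)))
            if overlap >= threshold:
                pairs.append((col_a, col_b, round(overlap, 2)))
    return pairs

# Hypothetical tables from two disparate applications:
crm = {"customer_id": [1, 2, 3, 4], "region": ["EU", "US", "EU", "APAC"]}
billing = {"cust": [2, 3, 4, 5], "amount": [10, 20, 30, 40]}
joins = candidate_joins(crm, billing)
```

Here the scan would surface `customer_id` and `cust` as a likely relationship, the kind of connection between data points the paragraph above describes.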
Cybersecurity Mesh
A cybersecurity mesh is a modern conceptual approach to security architecture that enables the distributed enterprise to deploy and extend security where it is needed most. The mesh is a flexible, composable architecture that integrates widely distributed and disparate security services.
By 2024, companies that implement a cybersecurity mesh architecture are expected to reduce the financial impact of security incidents by an average of 90%. Organizations now support multiple technologies in different locations, so they need a flexible security solution.
The mesh extends identity beyond the traditional security perimeter and creates a holistic view of the organization. It also helps improve the security of remote work. These requirements will drive adoption over the next few years.
The cybersecurity mesh enables best-in-class, autonomous security solutions to work together to improve overall security while moving control points closer to the assets they protect. It can quickly and reliably verify identity, context, and policy compliance across cloud and non-cloud environments.
Privacy-enhancing computing
Privacy-enhancing computation protects data while it is in use (as opposed to at rest or in transit), enabling secure data processing, sharing, cross-border transfer, and analytics even in untrusted environments.
Privacy-enhancing computing uses a variety of privacy-protecting techniques to extract value from data while meeting regulatory requirements.
This technology is rapidly transforming from academic research into real-world projects that deliver real value, opening up new forms of computing and sharing with reduced risk of data leakage.
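One widely used family of privacy-enhancing techniques is differential privacy, where a query over sensitive data returns an aggregate with calibrated noise so that no individual record can be reliably inferred from the result. A minimal sketch, with parameters chosen purely for illustration:

```python
# Illustrative differential-privacy sketch: answer a sum query with
# Laplace noise of scale sensitivity/epsilon, so individual records
# cannot be reliably inferred from the published aggregate.
import math
import random

def private_sum(values, epsilon=0.5, sensitivity=1.0, seed=0):
    """Return sum(values) plus Laplace noise of scale sensitivity/epsilon."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    # Sample Laplace noise via the inverse CDF of a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return sum(values) + noise

# Hypothetical binary attribute per person (e.g. "opted in"); true total is 4.
noisy_total = private_sum([1, 0, 1, 1, 0, 1])
```

Smaller `epsilon` means stronger privacy and noisier answers; production systems also track the cumulative privacy budget across queries, which this sketch omits.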
Cloud-native platforms
Cloud-native platforms provide technologies for building new, resilient, elastic, and agile application architectures that can respond to rapid digital change. They improve on the traditional piecemeal, lift-and-shift approach to the cloud, which often fails to realize the cloud's benefits and adds maintenance complexity.
Cloud-native application protection platforms (CNAPPs). CNAPPs bring together multiple cloud security tools and data sources, including container scanning, cloud security posture management, infrastructure-as-code scanning, cloud infrastructure entitlement management, and runtime cloud workload protection platforms.
Secure access service edge (SASE). SASE is delivered as a service and grants access to systems based on the identity of a device or entity, combined with real-time context and security and compliance policies.
SASE provides several converged network and security capabilities, such as SD-WAN and zero-trust network access (ZTNA). It also supports branch offices, remote workers, and general on-premises internet security.
Security service edge (SSE). SSE secures access to the web, cloud services, and private applications. Capabilities include access control, threat protection, data security, security monitoring, and acceptable-use control, delivered through network-based and API-based integration.
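The identity-plus-context access decision that SASE, ZTNA, and SSE describe boils down to a policy function evaluated on every request. A deliberately simplified, hypothetical sketch (all attribute names are invented):

```python
# Hypothetical zero-trust access check: every request is evaluated against
# identity, device posture, and context, never just network location.
def allow_access(user, resource, context):
    """Grant access only when identity, role, device, and context all pass."""
    checks = [
        user["authenticated"],                       # verified identity
        user["role"] in resource["allowed_roles"],   # least-privilege role check
        context["device_compliant"],                 # managed, patched device
        context["geo"] not in resource.get("blocked_geos", ()),  # contextual rule
    ]
    return all(checks)

request_ok = allow_access(
    {"authenticated": True, "role": "engineer"},
    {"allowed_roles": {"engineer", "admin"}, "blocked_geos": {"XX"}},
    {"device_compliant": True, "geo": "DE"},
)
```

A real policy engine would add continuous re-evaluation and risk scoring, but the shape of the decision, identity plus context against policy, is the same.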
Composite applications
Composite applications are built from modular, business-focused components. Composite applications simplify the use and reuse of code, accelerating the time to market of new software solutions and increasing enterprise value.
Physics-informed AI (PIAI) is AI that can build physically and scientifically sound models. Traditional data-driven AI models have limited adaptability because they cannot generalize beyond the data they were trained on. PIAI creates a more robust representation of the context and the physical product, and it has attracted particular interest as a more effective option for modeling complex systems, such as climate and environmental processes, that are difficult to model at their scale.
The pandemic crisis exposed the vulnerability of overly brittle business models. PIAI creates a more flexible representation of the context and conditions in which systems operate, allowing developers to build more adaptive systems. It can also produce business modeling systems that remain robust across a wider range of scenarios.
Other emerging technologies in this area include composite applications, composite networks, and influence engineering.
Decision intelligence
Decision intelligence is a practical approach to improving organizational decision making. It models every decision as a set of processes, using intelligence and analytics to inform, learn, and refine decisions.
An intelligent decision-making system can support and improve human decision-making and possibly automate it through the use of advanced analytics, modeling, and artificial intelligence.
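As a toy illustration of modeling a decision as an explicit, analyzable process, the sketch below scores options by probability-weighted outcomes. All option names and numbers are invented for the example:

```python
# Minimal decision-intelligence sketch: a decision is modeled as explicit
# options with probability-weighted outcomes, so the choice can be
# informed, logged, and refined rather than made ad hoc.
def best_option(options):
    """Return (name, expected_value) for the highest-expected-value option."""
    scored = {
        name: sum(p * value for p, value in outcomes)
        for name, outcomes in options.items()
    }
    choice = max(scored, key=scored.get)
    return choice, scored[choice]

# Hypothetical example: choosing a rollout strategy for a new feature.
decision = best_option({
    "full_launch": [(0.6, 100), (0.4, -50)],   # big upside, real downside risk
    "pilot_first": [(0.9, 40), (0.1, -5)],     # smaller but safer payoff
})
```

Real decision-intelligence platforms add data pipelines, simulation, and feedback loops, but the core idea is the same: make the decision model explicit so it can be analyzed and improved.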
Hyperautomation
Hyperautomation is a disciplined, business-focused approach to quickly identify, validate, and automate as many business and IT processes as possible. Hyperautomation enables scalability, remote work, and business model disruption.
Hyperautomation involves the coordinated use of multiple technologies, tools or platforms, including:
Artificial Intelligence (AI)
Machine learning
Event-driven software architecture
Robotic Process Automation (RPA)
Business Process Management (BPM) and Intelligent Business Process Management (iBPMS) suites
Integration Platform as a Service (iPaaS)
Low-code/no-code tools
Packaged software
Other types of tools for automating decisions, processes and tasks
Distributed Enterprises
Distributed enterprises reflect a digital-first, remote-first business model to improve employee engagement, digitize consumer and partner touchpoints, and improve product experiences.
Distributed businesses are better positioned to serve the needs of remote workers and consumers, who are fueling demand for virtual services and hybrid workplaces.
Total experience
Total experience is a business strategy that combines the employee experience, customer experience, user experience, and multiexperience across multiple touchpoints to accelerate growth.
Total experience can increase customer and employee trust, satisfaction, loyalty, and advocacy by managing stakeholder experiences holistically.
Autonomous systems
Autonomous systems are self-governing physical or software systems that learn from their environments and dynamically change their own algorithms in real time to optimize their behavior in complex ecosystems.
Autonomous systems create a flexible set of technological capabilities that can support new requirements and situations, optimize performance, and defend against attacks without human intervention.
Generative AI
Generative AI learns from existing artifacts in data and generates novel creations that are similar to the original but do not repeat it. It can produce new forms of creative content, such as video, and accelerate research and development cycles in fields ranging from medicine to product design.
Organizations can use generative AI, which creates original media content, synthetic data, and models of physical objects. For example, generative AI was used to create a drug to treat obsessive-compulsive disorder (OCD) in less than 12 months.
Tech trends drive digital business
The top strategic technology trends will accelerate digital capabilities and drive growth by addressing common business challenges for CIOs and technology leaders. They offer a roadmap to differentiate your organization from competitors, achieve business goals, and position CIOs and IT leaders as strategic partners within the organization.
Each produces one of three main results:
Engineered trust: Technologies in this segment create a more resilient and efficient IT foundation by enabling more secure integration and processing of data across cloud and non-cloud environments and by supporting cost-effective scaling of that foundation.
Sculpting change: By releasing new creative technology solutions in this space, you can scale and accelerate your organization's digitalization. These trends let you respond to the increasing pace of change by building applications faster to automate business activities, optimize artificial intelligence (AI), and enable faster, smarter decisions.
DeepTech analysts and experts predict technological breakthroughs in energy storage, artificial intelligence, defense, and quantum computing in 2025. DeepTech companies aim to change the world through scientific, engineering, and technological advances.
As technology advances, researchers seek to apply engineering and technological advances in areas such as processing and computing architecture, semiconductors and electronics, materials science, vision and speech technologies, AI and machine learning, and more.
Quantum computers scale up
Quantum computers continue to evolve with increasing numbers of interconnected physical qubits in various qubit implementations. As these systems scale, the need for quantum intra- and interconnectivity becomes increasingly apparent. Distributed and networked quantum computers, thanks to innovations in quantum transducers, repeaters, and switches, will enable this scaling.
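The entangled states such interconnects must distribute can be illustrated with a toy state-vector simulation of a Bell pair. This is a pedagogical sketch, not how real quantum control software works:

```python
# Toy state-vector simulation of two entangled qubits (a Bell pair), the
# kind of resource quantum interconnects must distribute between machines.
import math

def bell_state():
    """Apply H to qubit 0, then CNOT(0 -> 1), starting from |00>."""
    # Amplitudes over the basis |00>, |01>, |10>, |11>
    state = [1.0, 0.0, 0.0, 0.0]
    h = 1 / math.sqrt(2)
    # Hadamard on qubit 0 mixes the |0x> and |1x> amplitudes.
    state = [h * (state[0] + state[2]), h * (state[1] + state[3]),
             h * (state[0] - state[2]), h * (state[1] - state[3])]
    # CNOT with qubit 0 as control: swap the |10> and |11> amplitudes.
    state[2], state[3] = state[3], state[2]
    return state

state = bell_state()
```

The resulting amplitudes put all probability on |00> and |11>: measuring one qubit instantly fixes the other, which is exactly what networked quantum computers must preserve across transducers, repeaters, and switches.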
With the development of quantum technologies, additional security risks are bound to emerge, on top of the existing security concerns. NIST has introduced new standards for post-quantum cryptography, and replacing existing cryptography will be both a huge challenge and an opportunity. We also expect innovation in cryptography and security, and AI will likely play a bigger role in this area than ever before.
Active investment in quantum computing
Europe has historically lagged behind the US and, to a lesser extent, China in developing quantum computing technologies, both in hardware and software. This is changing rapidly with the emergence of a large number of increasingly well-funded startups, such as Planqc (Germany), Pasqal (France), Universal Quantum (UK/Germany), Quantum Circuits (US), and Quandela (France).
Given the sovereign aspects of the quantum computing race, we expect continued investment in European companies at relatively high valuations, as well as a push to build a commercial and partner ecosystem around them.
Wider use of AI
Artificial narrow intelligence (AI applied to specific problems) has found application in many verticals, but many industries can still benefit from advances in AI. We expect to see more AI in industries such as agriculture, healthcare, and cybersecurity.
As these models become more complex, the field of machine learning operations (MLOps) is likely to grow even more. The development of artificial general intelligence (AI that can learn to solve many problems) will raise ethical concerns.
AI will allow battery management
In the face of global energy supply challenges, the ability to optimize energy consumption is paramount. Likewise, for the EV market to realize its potential, the industry still needs to invest in layers of battery management systems that are context-aware and can comprehensively manage not only battery depletion but also battery charging. AI is critical to enabling both use cases.
We expect to see major corporate players in energy, building management, automotive and mobility, including their supply chains, actively investing and acquiring assets in the AI implementation space.
Battery technology will be in focus
Redoubling efforts to optimize energy consumption will be a key priority in climate technology. Partly out of necessity, energy storage will gain momentum, especially at the industrial level. New and reusable batteries will benefit from the dry powder of climate venture funds.
Alternatives to lithium-ion batteries for energy storage, from alternative metals to new technologies such as flow batteries, will receive more attention. The reason:
We are witnessing an energy crisis in Europe. At the current rate of consumption and with the geopolitical and macroeconomic environment looking increasingly bleak, Europe is likely to exhaust its energy reserves.
European DeepTech Companies Will Aim for Sovereignty
The momentum towards greater national sovereignty and European autonomy will grow. In this regard, defense spending will also become a more important factor, as in the US. While these major trends will lead to increased government support and spending, a specific strategy for a deep tech company to position itself in the new reality will become even more important.
Generative AI Will Reach Europe
AI technologies will be used to generate content based on language, images, and video, and will impact everything related to marketing and customer engagement. AI-powered content generation is currently just the tip of the iceberg and will evolve into a variety of content possibilities, including interactive and dynamic content, as well as movies that adapt to each viewer.
New DeepTech Breakthroughs in Healthcare and Defense
The next big breakthroughs in deep tech will be in biotech and healthcare. We will see predictive diagnostics and precision therapeutics companies, as well as a new wave of medicine and therapies specifically designed for women.
In addition, the economic downturn, war in Europe and rising tensions between East and West will lead to innovation in European companies developing sensors, space technology, hardware, data and AI, with an emphasis on dual-use technologies.
Tech sector involved in clinical trials
In 2024, we will see another private tech company acquire a traditional biotech to increase its clinical trials expertise.
Over the past few years, we've seen AI-designed drugs reach phase 1 clinical trials; Exscientia, Insilico, and Recursion have been leaders in this area. Generative AI and large language models have the potential to greatly improve the efficiency of these efforts. AlphaFold was the breakthrough of 2021, but it's just the beginning.
But companies built as technology platforms often have a long way to go before they can run their own clinical programs. It's not as simple as hiring a couple of experts and hitting the "go" button: the processes, policies, and expertise needed to ensure that drugs do no harm can be foreign concepts to companies rooted in machine learning and engineering. Platform companies that recognize this gap in their capabilities will reap the rewards.
Tech Leaps Across Deep Tech Sectors
We foresee exciting developments in the coming years. In particular, breakthroughs in materials science will enable a new generation of batteries with new levels of performance, making exciting use cases such as e-aviation commercially attractive. The availability of experimental data at scale and new predictive models will lead to exciting new developments in biochemistry.
The fields of automation and AI-enabled robotics seem to be increasingly fulfilling their long-term promise of actually delivering significant productivity gains. Last but not least, in 2024, the first private European rocket will be launched into low Earth orbit, ushering in a new era and momentum for the European space ecosystem.
10 DeepTech Unicorn Startups in Europe:
Here are 10 of the most prominent DeepTech unicorn startups in the European tech ecosystem:
Celonis – process mining, valued at €13 billion.
Northvolt – electric vehicle batteries, valued at €12 billion.
Improbable – metaverse gaming, valued at €3.4 billion.
CMR Surgical – medical robotics, valued at €3 billion.
Graphcore – AI chips, valued at €2.4 billion.
OCSiAl Group – graphene nanotube producer, valued at €2 billion.
Exotec – robotics and automation, valued at €2 billion.
Volocopter – electric flying taxis, valued at €1.7 billion.
Cognite – machine learning solutions for business operations, valued at €1.6 billion.
MindMaze – virtual reality interfaces for medical rehabilitation, valued at €1.5 billion.
AI will spread to biology, energy and cybersecurity
Artificial intelligence is predicted to become an important technology in all industries. Significant progress can be expected in the near future. For example, AI can be used to decipher complex biological processes or applied in the energy sector to manage renewable energy generation and decentralized power grids.
The focus will be on IT security solutions, as they are critical to the security of our economic and social infrastructure. There is a growing demand for advanced threat detection capabilities, as well as automated testing and analysis platforms.
The Metaverse is still on the edge of fantasy
The metaverse is not a single continuum but rather an interplay between digital platforms and products in Web2 and Web3 and the ways they are consumed. After the hype of a year ago, excitement around the metaverse is fading: tech experts and investors now understand that it does not yet interest the general public and remains a niche for gamers.
Devices such as VR headsets, smart glasses, and smartphones will act as gateways, giving users access to three-dimensional virtual or augmented environments where they can work, socialize, do business, visit remote locations, and pursue education, all mediated by technology in new and engaging ways.
One area to watch for the metaverse is the workplace. Companies like Nvidia and Microsoft, as well as one of our portfolio companies, Engage XR, have developed platforms for collaborating on digital projects. The use of AR and VR for training in organizations will continue to grow this year.
The Challenges of the Deep Tech Industry
While DeepTech has great potential, it is still a long way from mass adoption. The industry faces a number of unique challenges today. Future adoption will depend largely on how quickly the industry can overcome these challenges. Key challenges include:
Securing funding. Despite initiatives from several governments, DeepTech projects often struggle to get funding. Research timelines can stretch out with no real guarantee of success, so funding tends to flow to organizations developing consumer products, where the return on investment arrives sooner and is easier to measure, especially at early stages.
Identifying market opportunities. Researchers developing DeepTech solutions and products may not be able to identify opportunities to present their developments from both a marketing and economic perspective. Very often, these companies rely on other channels or third-party services for the right marketing strategy and planning. This is where working with incubators or government bodies becomes crucial – countries that provide this opportunity through a well-defined ecosystem will lead the deeptech revolution.
Scalable development. Many DeepTech innovations get stuck at the proof-of-concept stage – not because they are not innovative enough, but because they do not scale to mass production. This requires the right infrastructure, as well as a deep understanding of how products and services can be commercialized.
There are several global companies trying to change entire industries with their inventive proposals. We are witnessing some groundbreaking innovations in autonomous vehicles, food technology, computer vision, artificial intelligence, weather forecasting, clean energy solutions – the list goes on – that will benefit us in the future.
AI is changing the rules of the game in software development: it generates code, automates routine processes, and speeds up product releases. Experts explain what opportunities this opens up for business.
In 2018, research company Gartner predicted that software development teams would soon begin to use AI en masse. The forecast proved accurate: in 2023, 41% of code was written by AI, and at Google the share exceeded a quarter. As of 2024, AI had already generated 256 billion lines of software code.
In May 2024, the international platform Stack Overflow surveyed more than 65,000 developers around the world. AI is used unevenly across tasks: 82% use it for writing code, 57% for fixing bugs, 40% for creating documentation, 27% for testing, and less than 5% for deploying and monitoring applications. Respondents named documentation (81%), testing (80%), and writing code (76%) as the most promising areas for the near future.
Scope of AI application
Modern AI tools change the functionality of each specialist in the development team – analyst, programmer, tester. “The role of the engineer is shifting from routine development to managing the architecture and quality of the project. And artificial intelligence takes on tasks that can be quickly and easily automated,” said Dmitry Medvedev, Director of the Applied Solutions Department at Lanit-Tercom.
The greatest enthusiasm is caused by the ability to generate working code. “This is an area where you don’t even need to prove anything to anyone, it’s so obvious to everyone that AI speeds up processes,” stated Vladislav Balayev, head of practice at the Lanit Big Data and Artificial Intelligence Competence Center. “On average, by 50%, meaning that a developer can already perform twice as many tasks. Routine operations (writing simple tests or restructuring code) can be almost entirely delegated to generative tools.”
“If you have a startup with a clearly defined set of rules and requirements, then, in principle, artificial intelligence can generate a quality project from zero to MVP (minimum viable product),” says Dmitry Medvedev. “Later, it will certainly be necessary to involve more experienced developers and architects to refine the product and launch it into operation.”
A number of AI tools help developers. Popular international services include Cursor, Windsurf, and GitHub Copilot; other products include GigaCode, SourceCraft Code Assistant, and Kodify.
At the same time, the scope of AI application in development is diverse, as evidenced, in particular, by a study conducted by Lanit-BPM in 2024. The company said that AI tools can not only write code at the level of a junior developer, but also explain algorithms, generate unit tests, test cases, documentation, decipher recordings of meetings with customers, and answer questions on project documentation.
Alexander Nozik noted that a growing number of studies show that the main benefit of AI lies in searching for information and solving secondary tasks. “For example, programmers really don’t like writing documentation, but language models (not even large ones, but local ones) cope with this very well,” he noted.
In prototyping, the use of AI reduces the time it takes to create an MVP from several months to weeks or days, Dmitry Medvedev said. In addition, AI can help improve the quality of the code: it analyzes historical data, identifies vulnerabilities, and predicts potential errors, which reduces the number of bugs and increases the reliability of products.
AI is also being implemented in the work of analysts: companies are experimenting, looking for tasks that can be automated, Vladislav Balayev emphasized. Neural tools can help analysts in recording and summarizing meetings, searching the knowledge base and other routine processes.
One such tool is Landev AI’s Silicon Assistants platform. It allows you to locally deploy large language models (LLM), including code generation models, and use them in both chat mode and complex document, audio, and image processing pipelines. This allows employees to safely test hypotheses and share ideas within the team.
For example, the platform can be used at the stage of collecting and analyzing customer requirements, says Vladislav Balayev: “The customer describes his ideas and is asked clarifying questions. Then a summary needs to be produced, and AI speeds up this process fourfold while also being able to work on several projects at once.” A promising next step is to formalize the result as a ready-made specification, added Alexander Lutai.
The use of AI has its limitations and disadvantages. It is important to remember that models are trained on open existing code, which may contain vulnerabilities, and, accordingly, reproduce them, warns Alexander Lutai. “AI-generated code is often fragile, it breaks with small changes in the task statement. Solving complex tasks using AI is much more labor-intensive than classical methods,” Alexander Nozik noted.
Experts agree: AI is useful because it frees employees from performing standard tasks and automates routine work. “Of course, a developer should retain expertise in software development, have a good knowledge of the programming languages used in the project, and be able to write basic constructions,” noted Alexander Lutai. “But if all the code is written manually, it will take too much time. AI tools can act as assistants to the developer: he will have more time for more creative tasks that will add value to the company — improving the product or coming up with a new one, responding to feedback from users.”
Safety and possible risks
Neural assistants consist of two parts, explains Alexander Lutai. The first part is a development environment or interface where the AI assistant can be integrated. The second part is the actual large language model, which can be hosted either in the cloud or locally.
Interaction with the cloud model assumes that some information — a developer’s request, a code base — will go beyond the company’s perimeter. “For some, this is unacceptable. In the case of locally deployed LLMs, this risk is eliminated, but resources are required. A model with a size of 8-14 billion parameters can be deployed on a fairly good computer, for larger models you need to buy a server. This costs money,” noted Alexander Lutai.
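The local-versus-cloud tradeoff described above amounts to a routing policy for requests. A deliberately simplified, hypothetical sketch (the marker list and backend names are invented):

```python
# Hypothetical sketch of the routing policy described above: requests that
# must not leave the company's perimeter go to a locally hosted model,
# everything else may use a cloud LLM where compliance allows it.
SENSITIVE_MARKERS = ("password", "api_key", "proprietary", "confidential")

def choose_backend(prompt, contains_source_code=False):
    """Return 'local' for anything that must stay inside the perimeter."""
    lowered = prompt.lower()
    if contains_source_code or any(m in lowered for m in SENSITIVE_MARKERS):
        return "local"      # e.g. a self-hosted 8-14B parameter model
    return "cloud"          # permitted by compliance policy

backend = choose_backend("Summarize this confidential design doc")
```

Real deployments classify data far more carefully than keyword matching, but the decision structure, sensitivity in, backend out, is the same.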
“There is a good saying: ‘There is no cloud, just other people’s computers,’” Nikolai Kostrigin reminded. “For processing official and especially confidential information, it is better to build your own infrastructure, even though it is more expensive. For example, in research on secure software development, where the processed data may contain information about code vulnerabilities, private infrastructure at least guarantees that the embargo holds during the responsible disclosure period.”
However, it is obvious that public resources are being used and will continue to be used – at least to reduce development costs, the expert added.
“When you send something outside, you take a risk: the place you send it to can be hacked, your message can be intercepted in the middle. A separate issue is that from the point of view of our country’s security, it is simply impossible to send code to external models, especially in government projects,” Vladislav Balayev emphasized. This creates risks of intellectual property leakage and inclusion of elements in the code that violate license agreements: the generated code may contain a fragment protected by copyright, says Dmitry Medvedev.
For sensitive code bases in corporations, the use of commercial network large models is usually not considered at all – large companies rely on the deployment of local models, notes Alexander Nozik.
Implementing AI: Expert Advice
For entrepreneurs and investors, the increasing spread of AI means a fundamental shift in approaches to creating digital products. “If developers do not learn to operate with large language models, generate code, use certain editors or plugins for this, then they will simply become uncompetitive in the coming months, they will lose momentum,” warns Vladislav Balayev.
At the same time, experts emphasize: it is important to correctly use the capabilities of AI. “The main danger here is to try to solve all problems with the help of AI. This usually only leads to increased costs,” says Alexander Nozik. For the successful implementation of AI, it is necessary to conduct a study of business processes and find fairly simple tasks that can be entrusted to it, he noted.
It is very important to have a clear understanding of where artificial intelligence can be used, Dmitry Medvedev noted: “AI will not take on all the tasks. You will still need employees to monitor the results, and you need to clearly define the area where AI will be implemented.”
Effective use of AI requires the ability to restructure thinking, experts note. “First, you need to understand where the boundaries of the data that can be given to external services are,” advises Alexander Lutai. “Then invest in training employees in the correct communication with models, writing prompts. You can use cloud LLM in those issues where compliance allows it. And thus, specifically for yourself, feel out those areas of application where LLM helps to solve problems faster.”
The scenarios that have proven effective need to be used to form a knowledge base, the speaker continues: “People will start using them. Because if you simply give access to the model, it will be difficult for most employees to trust this tool and start using it effectively.” And for the data that cannot be given outside, it is necessary to select a suitable LLM, deploy it within the company’s perimeter, and then create more specialized solutions based on it, added Alexander Lutai. In all this work, it is best to seek qualified advice from professionals, experts emphasize.
Prospects
Artificial intelligence has already become an integral part of the software development process, changing traditional approaches and increasing the efficiency of teams. “Now this is not just a new trend, but stable and effective work in the product environment,” Dmitry Medvedev noted. “I think the role of AI will only increase in the near future.”
The future belongs to hybrid solutions, where neural networks complement human skills. “Artificial intelligence is a support tool, not a replacement for the developer’s professional experience,” Dmitry Medvedev emphasized. “AI will not take over all functions. It will help in code generation, in relatively simple tasks. But if the developer, programmer, or employee does not understand what AI has generated, this will very quickly lead to a crisis in the project.”
“I think that as tools become more widespread and the hype around them subsides, AI will become as much a given as an IDE (integrated development environment – Ed.) or static code analyzers,” says Alexander Nozik. “Open-source models are gradually catching up with proprietary ones in quality, so the security problem of keeping models within a closed perimeter will also be solved.”
The world is changing faster than we can get used to. What was surprising yesterday is now being implemented in hospitals, businesses, stores, and homes. Artificial intelligence doesn’t just generate texts — it heals, edits videos, analyzes risks, and helps designers. And 2025 can definitely be called the year when countries began to build “sovereign AI” and launch a new space economy.
In this article, we’ll highlight 15 technological changes that are already shaping the new reality. We’ll look at examples from different countries to better understand the scale and direction of the movement. And even if some trends seem distant, they’re already knocking on the doors of business, education, and medicine.
AI is moving from hype to practical tools
Just a few years ago, AI was associated with entertainment: drawing pictures or generating text. In 2025, it is already a full-fledged player – not in the future, but in the present. Businesses are automating routine processes, countries are launching national AI development programs, and technology giants are competing for primacy in creating the most powerful model.
What has changed?
The price of processing 1 million tokens has fallen from $36 (GPT-4) to $0.25 (GPT-4o mini), making AI accessible even to startups.
Anthropic’s Claude 3.5 Sonnet model achieves over 76% accuracy on challenging tasks, outperforming even GPT-4o.
Qwen2.5 from China’s Alibaba is ranked number one in the world among open models on Hugging Face.
And also…
The US, China, Australia, Belgium and Brazil have increased their AI funding by 2-5 times over the past year. For example, in Brazil, investment growth was 471%.
Companies like Walmart have already integrated generative AI into daily processes, from updating product catalogs to making personalized recommendations to customers.
China Develops AI and Challenges the US
While American companies compete for AI supremacy, China is rapidly closing the gap. And it’s doing so confidently: with government support, billions in investment, and an ambition to create an alternative to Western models.
Now Chinese products are not just local solutions, but world-class players.
Qwen and Yi models top the world rankings
In the open rating of the Hugging Face platform, Chinese models are already on par with GPT-4, Claude and Mistral. The following stand out:
Qwen (by Alibaba) is a multilingual, open-source model that is easy to customize;
Yi (by 01.AI) is powerful, fast, and already shows great results in reasoning and coding tasks.
They are no longer just used in China – they are becoming attractive to developers around the world who are looking for an open and competitive alternative to American solutions.
AI is part of China’s national strategy
Unlike the West, where AI development is a matter for private companies, in China it is a national issue. The state:
allocates resources to AI education and science,
actively supports AI startups,
oversees development activities related to safety and ethics.
Trust in AI is the new currency of the future
Artificial intelligence has learned to generate texts, write code, and even consult. But can we trust it completely? As AI penetrates medicine, education, and public administration, the demand for explanation of decisions and transparency of models is growing.
The “black box” problem
Most large models today are “black boxes”: they produce results but do not explain the logic behind them. This creates tension in sensitive areas – when it is not convenience that is at stake, but life or justice. Therefore, AI must not only be intelligent, but also understandable.
Who is already working on this?
Key market players are actively seeking solutions:
Anthropic in the Claude 3.5 model emphasizes the interpretability of answers by adding step-by-step reasoning.
OpenAI is exploring how models form their conclusions – in partnership with the academic community.
Google DeepMind is implementing self-checking mechanisms that highlight the logic behind a model’s response.
These efforts are not just technical progress, but an attempt to turn trust in AI into a systemic value. If you are interested in understanding how to work with AI – and how it can be used in a profession of the future – we invite you to our upcoming events. Choose an online course that will help you master a modern specialty and move confidently toward new opportunities.
The Transition to Sovereign AI
More and more countries are moving from consuming global AI services to creating their own ecosystems. Sovereign AI is not just a matter of digital independence, but a strategic step for security, economy, and development. States want to control data, influence algorithms, and develop local talent.
Who is already building their own AI systems?
A number of countries are not waiting for global solutions – they are creating their own:
Belgium is launching a national open AI platform for government agencies.
Brazil is investing in its own LLM models for use in education and healthcare.
South Korea is developing neural networks in the Korean language with an emphasis on local queries.
Italy has opened a state-run AI development center based at a university.
These are examples of a new approach: AI as a state-level infrastructure, not just a commercial tool.
Nvidia is the engine of local ecosystems
Nvidia has become not only a graphics card manufacturer, but also a key player in the creation of national AI hubs. It provides cloud solutions, servers, and complete tech stacks for local AI deployment. With these tools, countries can build AI independently – and faster.
Small Open Source Models Are the Weapon of Startups
Not all companies can afford to use GPT-4 or Claude in full. But the good news is that open-source small language models (SLMs) are becoming a powerful alternative – accessible, flexible, and significantly cheaper. They are opening the door to AI for startups and small businesses.
Meta, Mistral, Microsoft — Drivers of Openness
Several players have made open AI a reality:
Meta, with the LLaMA 3 model, has created one of the highest quality open-source platforms for local execution.
Mistral AI entered the market with compact models that are not inferior to larger ones in many tasks.
Microsoft supports the development of open models in its Azure infrastructure, making it easier for teams without their own servers to get started.
This is a new round of competition: not only the smartest one wins, but also the most accessible one.
AI Becomes Economically Achievable
The price of entry into the world of AI is falling rapidly. For example, GPT-4o mini costs only $0.25 per million tokens, which opens up opportunities for prototyping, customization, and scaling even on a minimal budget. Startups can launch their own AI assistants, internal chatbots, and content generators without overpaying for a “big license”. AI has become not only smart, but also financially feasible.
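To make the economics concrete, here is a minimal back-of-the-envelope sketch. The $0.25-per-million-token price comes from the text above; the traffic figures (requests per day, tokens per request) are purely illustrative assumptions:

```python
# Back-of-the-envelope cost estimate for an LLM-powered assistant.
# Price per 1M tokens is taken from the article; usage numbers below
# are hypothetical, for illustration only.

PRICE_PER_MILLION_TOKENS = 0.25  # USD, e.g. a small model like GPT-4o mini

def monthly_cost(requests_per_day: int, tokens_per_request: int,
                 days: int = 30) -> float:
    """Estimated monthly spend in USD."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# A startup handling 10,000 requests/day at ~1,500 tokens each:
print(round(monthly_cost(10_000, 1_500), 2))  # 112.5
```

At roughly $100 a month for ten thousand daily requests, the “financially feasible” claim above is easy to verify with simple arithmetic.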
Spatial Computing – A New Era
When the digital and physical worlds begin to merge, we talk about spatial computing. These technologies allow you to work with information as if it were “alive” in the space next to you. Already today, these solutions are moving beyond experiments and into everyday life.
Leaders in spatial technologies
A group of innovative companies has already formed on the market:
Apple Vision Pro sets a new bar for visual experiences in virtual and augmented reality.
Meta Quest 3 is actively promoted in the gaming, educational and collaborative spheres.
Google Project Starline allows you to communicate via “3D video conferencing” – as if the person were right next to you.
Where does this already work?
Spatial computing is no longer futurism – these technologies are already being used in real tasks:
In medicine, for example at Boston Children’s Hospital, to simulate complex surgeries.
In architecture and the automotive industry, in particular at GM, to visualize projects at real scale.
In education, to create inclusive and deep learning experiences.
Spatial computing changes not only the interface, but also the way we think.
Immersive learning is the new standard for companies
Lecture- and presentation-based training is gradually becoming a thing of the past. It is being replaced by simulation training, augmented reality, and 3D scenarios that engage employees in the process more deeply, quickly, and interestingly. This is not just a new form of delivery — it is a change in the corporate development paradigm.
Technologies that transform learning
Global companies are already implementing immersive tools:
Microsoft HoloLens enables training in AR environments: from engineering to medicine.
NVIDIA Omniverse models manufacturing and team processes in 3D space.
Companies are using VR headsets, simulators and interactive environments to provide training in realistic settings.
Not only are these formats more effective, they are also more memorable and create a positive impression of the process itself. Immersive technologies are changing not only the approach to learning, but also the way we interact with brands. Read more about this in the article “10 Key Trends That Will Change Digital Marketing in 2025”.
More than 50% of market leaders are already in the game
According to analytics, more than half of Fortune 100 companies already use immersive learning in their internal programs. This includes staff training, crisis drills, security training, technical skills, and soft skills. Companies are investing in experiences, not just information — and it’s working.
Medical Revolution Thanks to AI
Artificial intelligence is already changing medicine not in theory, but in practice. Algorithms do not simply analyze symptoms — they help identify diseases at early stages, when the chances of recovery are much higher. This applies to the most complex diagnoses: depression, oncology, Alzheimer’s disease.
Where does this already work?
These programs do not replace a doctor, but they help make decisions faster, more accurately and more effectively. A number of companies have already brought AI solutions to the market:
Lucem Health (USA) – uses AI to analyze medical data and signals about potential risks.
Sensely (Switzerland) – virtual medical assistants that interact with patients 24/7.
Ubie (Japan and India) – AI platforms for primary diagnostics that are already used in hospitals.
What’s next?
AI is gradually being integrated into every stage of medical care, from screening to treatment. Its ability to work with large amounts of data gives doctors more confidence and support in difficult situations. And perhaps in the near future we will not be talking about “medicine of the future,” but about a new norm — with AI nearby.
Retail and personalization – 1:1 in real time
Classic mailings and banners no longer work as they used to. Modern buyers want offers that exactly match their interests, desires, and the moment. That is why retail is moving en masse toward real-time AI personalization.
How does it work for market leaders?
Major players have introduced AI assistants into the purchasing process:
Walmart uses chatbots and visual assistants that suggest products based on previous purchases.
Target creates personalized selections right in the app, responding to requests in real time.
Amazon adapts the main page, recommendation blocks, and even prices to each user individually.
Personalization drives efficiency
According to research, personalized recommendations are three times more effective than mass campaigns. Shoppers ignore such offers less often, click more often, and buy faster. For brands, this is not a trend, but a new standard in customer communication.
Generative AI in e-commerce
Text descriptions, recommendations, visuals, chat responses – none of this is written manually anymore. In 2025, generative AI is completely changing e-commerce, automating processes that used to take hours. This not only saves time but also enables scalable personalization across thousands of products at once.
How does this work in real companies?
E-commerce leaders have already integrated AI into their catalogs, allowing them to respond to trends in real time — without having to hire a full team of marketers.
Walmart has updated over 850 million product descriptions using Large Language Models (LLM).
Generating name variants, selecting tags, writing SEO texts – all this is done by a ChatGPT-like assistant.
Other companies create visuals, banners, and even recommendation carousels using generative models.
AI as a buyer’s assistant
Modern online stores do not just sell – they consult, advise, and even entertain. AI assistants answer questions, help choose a size, combine items into an outfit, or pick a gift. And all of this happens in real time, in a dialogue, without human intervention.
A New Leap Forward in Pharmaceuticals: RNA Therapies
Just a few years ago, RNA technologies seemed like a distant future. Today, RNAi, mRNA, and ASO are already being actively used to treat diseases that were considered incurable. This is a real breakthrough that is changing the approach to medicine at the cellular level.
How do RNA therapies work?
Unlike classic drugs, RNA technologies work at the level of genetic information. They can block “incorrect” proteins, trigger necessary processes, or correct genetic errors. This opens up new ways to treat cancer and rare genetic syndromes.
Who is already implementing it?
RNA therapies are becoming the basis of a new generation of pharmaceuticals – more precise, effective and personalized. Leading companies are no longer just testing these technologies – they are already producing real drugs:
Alnylam (USA) develops RNAi drugs for the treatment of nervous and cardiovascular diseases.
Vico Therapeutics (Netherlands) is working on ASO solutions for genetic disorders.
Chinese biotech companies are actively developing next-generation mRNA vaccines.
Space has become closer
Until recently, space was exclusively the domain of states and scientific agencies. But in 2025, the situation has changed dramatically – private companies are actively exploring orbit, launching satellites, and testing new formats of logistics and communications. Technologies that seemed like science fiction yesterday are becoming business.
Who dictates the pace?
Today, the leaders in space launches are not governments, but corporations:
SpaceX carries out more than 70% of all US launches.
Starlink provides satellite internet to dozens of countries, including remote regions.
New startups are creating solutions for cargo transportation, construction in orbit, and deployment of mini-stations.
The private sector has not only caught up with NASA, it is setting the pace and commercial sense of space exploration.
New Horizons in Infrastructure
A separate industry is orbital data centers, which operate without the need for cooling and with minimal data transfer delays. There are also developments underway for orbital refueling systems and space manufacturing, which will allow components to be created without returning to Earth. Space is becoming not a goal, but a medium for innovation.
The New Data Center Boom
AI applications require not only powerful models, but also stable infrastructure. In 2025, the world is experiencing a real boom in the construction of data centers, which are becoming the backbone of the digital economy. Every request to a language model, every visualization or text generation is gigabytes of data and energy that are processed in real time.
Why is infrastructure the new priority?
The emergence of generative AI, spatial computing, and immersive learning has increased the load on servers several times over. This forces companies to invest in new types of data centers — more powerful, more stable, and more environmentally friendly. AI market leaders, cloud platforms, and governments are especially active in this area.
What is the new wave built on?
To ensure stable operation of data centers, companies are looking for alternative energy sources:
Nuclear energy as a long-term stable solution;
Geothermal energy as a green option with a low carbon footprint;
Thermonuclear projects as an investment in the energy of the future.
Liquid cooling and new energy efficiency technologies
The infrastructure for AI is growing at an incredible rate, and with it, the need for server cooling. Traditional ventilation systems can no longer cope with the load, so companies are switching to liquid cooling. This solution is not only more effective but also more energy-efficient, and it is becoming the standard of the future.
What is liquid cooling?
Instead of simply blowing air over the servers, the system uses special liquids or water loops that effectively reduce the temperature of the equipment.
This cuts energy consumption, lowers the risk of overheating, and increases server density, while the technology operates quietly and smoothly – ideal for high-load data centers.
A trend that is becoming the norm
Analysts predict that by 2026, more than 38% of the world’s data centers will switch to liquid cooling. This is driven not only by energy savings, but also by the market’s environmental requirements: companies are increasingly reporting their carbon footprint. Green data centers are becoming not just an image element, but a criterion for choosing a partner.
Local AI hubs in unexpected countries
Artificial intelligence is no longer confined to Silicon Valley. In 2025, local AI hubs are appearing in countries that were not previously associated with high technology. This is a new stage of AI globalization – one where each country gets a chance to create its own intellectual ecosystem.
Where is growth already happening?
The most actively developing are:
India is creating national AI centers at universities and launching startups with global ambitions.
Norway is investing in green data centers and AI solutions for sustainable development.
Brazil and Italy are forming hubs for localized AI products and support for small businesses.
In conclusion
Artificial intelligence is no longer a futuristic dream, but a reality that changes business, medicine, education, and even the idea of everyday things. We live in a time when technology is developing faster than we can adapt. But this is precisely where the greatest opportunity lies: the first to understand the principles of the new era will gain a real advantage. And knowledge about AI today is not a theory, but a real tool for the future.
If you want to move confidently toward a new profession, we invite you to our upcoming events. There you will be able to learn more about modern trends, communicate with mentors, and choose a course that suits you. Start changing your professional life now – the future is being created today.
“Time is money,” “time heals,” “time drags on” – our whole life is governed by the ticking hand of the clock, yet humanity still knows very little about what time actually is.
What is time – in simple words
For the vast majority of people, time is an intuitively understandable quantity that characterizes the past, present, and future. There is no single definition of time in the scientific space yet. Moreover, representatives of different branches of science have almost opposite opinions on this matter.
For example, in classical physics, time is an absolute and unchanging quantity based on a certain sequence of events that occur at equal intervals. It is on this principle of periodicity that clocks are based. At the same time, Einstein’s theory of relativity says the opposite: time can change depending on the observer and the frame of reference. In this context, we can speak of time dilation – the stretching and even slowing of time.
In philosophy, time is the universal form of being and the flow of all mechanical, organic and mental processes. Time is a condition for movement, change and development, and only in one direction – forward.
Some scientists argue that the way we perceive time is just an illusion, an artifact of our consciousness. According to American physicist Sean Carroll, what we experience as the flow of time is a by-product of our brains as we process sensory information from our environment.
Aristotle pondered over what time is almost two and a half millennia ago. “Time is the most unknown of all unknown things,” said the Greek philosopher. Little has changed since then; scientists continue to conduct experiments and make calculations to catch time by the tail, subjecting it to rational explanation.
How the flow of time works
The physical equations of classical mechanics work equally well whether time moves into the future or into the past. However, in reality, seconds only go in one direction, determined by the thermodynamic arrow of time, and this arrow leads exclusively forward. Let’s figure out why it is impossible to turn back time and what can still be done to change the flow of time.
Linearity of time
The reason for the irreversibility of time is that the natural world obeys the laws of thermodynamics, according to which all processes in the world tend from an ordered system to chaos, that is, to an increase in entropy. In other words, a fallen leaf will never grow back to a branch, dust will not gather into paper, and the Universe can never return to exactly the same state it was in at the previous moment.
Einstein’s Theory of Relativity
In classical mechanics, time is absolute, uniform, and unchanging, and all synchronized clocks tick at the same rate. However, we know from Einstein’s theory of relativity that time depends on the position of the observer – in other words, clocks tick at different rates depending on the frame of whoever carries them. For example, if the observer undergoes large acceleration or is near a black hole with strong gravity, time slows down dramatically and can effectively stop. But there is also more tangible evidence that time can be made to tick slightly slower.
Is it possible to slow down time?
In 1971, two scientists, physicist Joseph Hafele and astronomer Richard Keating, measured the time difference between a super-precise cesium atomic clock flown around the world on a jet plane and another clock that remained on the ground. It turned out that the clock on the plane ran slower than the one at rest below. Twenty-five years later, the experiment was repeated with even more precise equipment, and Einstein’s theory of relativity was confirmed once again.
If one of the twins went on a journey in a spaceship and traveled for several years at a speed close to the speed of light, then upon returning to Earth he would be younger than his brother on Earth. Thus, NASA astronaut Scott Kelly actually lived a few milliseconds less than his twin Mark, thanks to the fact that he spent more time in space, traveling at a speed of about 28.1 thousand km/h.
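The effect behind the twin story can be checked with the special-relativistic time-dilation formula, t′ = t·√(1 − v²/c²). The sketch below plugs in the orbital speed of about 28,100 km/h quoted above. It is a simplified, velocity-only calculation: real orbital clocks are also affected by gravitational time dilation, which is ignored here:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dilated_time(proper_days: float, speed_ms: float) -> float:
    """Days elapsed on the traveler's clock while proper_days pass on Earth."""
    return proper_days * math.sqrt(1 - (speed_ms / C) ** 2)

# Orbital speed quoted in the text: about 28,100 km/h, converted to m/s
v = 28_100 * 1000 / 3600  # ≈ 7,806 m/s

year_days = 365.0
lag_seconds = (year_days - dilated_time(year_days, v)) * 86_400
print(f"traveler falls behind by {lag_seconds * 1000:.1f} ms per year")
# prints ≈ 10.7 ms per year (special relativity only, gravity ignored)
```

This order of magnitude, milliseconds per year, matches the “a few milliseconds” scale mentioned for the Kelly twins.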
Is time travel possible?
Time travel is not only a figment of the imagination of sci-fi fans, but also a real object of scientific research. However, scientists understand a time machine not exactly as we saw in the films Back to the Future or Star Trek. The closest to scientific ideas were the creators of the film Interstellar, describing movement through black holes. Einstein claimed that supermassive objects such as black holes really do bend time around themselves. Therefore, near a hole, time moves more slowly, or even stops altogether.
There is a hypothesis that time travel is possible through “wormholes” – tunnels in space-time through which one could pass into another place and time. However, such wormholes collapse instantly, and it is assumed that only ultra-small particles can slip through them.
Another time travel hypothesis is the “infinite Tipler cylinder.” Astronomer Frank Tipler proposed a mechanism in which matter equal to ten solar masses, rolled into an infinitely long and very dense cylinder, rotates at a speed of 1 billion revolutions per minute. A spacecraft following a spiral path around the cylinder would enter a “closed timelike curve” – a world line where everything returns to its original point in space-time.
All these hypotheses are doomed to exist only in theory, since a person can neither approach a black hole, nor reach the speed of light, nor acquire supermass. In addition, time travel can lead to major problems – from the release of radiation, fatal to all living things, to the “paradox of the murdered grandfather.” Its essence is that a time traveler returns to the past, kills his grandfather in his youth and as a result disappears – because he was never born. However, this theory has opponents. They argue that it is possible to travel to the past only along a closed curve, inside which all events are looped with each other and one cannot interfere with the other.
How Living Organisms Perceive Time
The human brain has learned to track time itself, adjusting its biological rhythms to the time of day. The perception of time is affected by the level of dopamine in the body. This hormone-neurotransmitter is responsible for a good mood and is produced when a person anticipates something pleasant. The higher the level of dopamine, the slower time flows for a person. People with some mental disorders have a disturbed sense of time: for the especially impulsive, time goes too quickly, and in schizophrenia, it slows down significantly.
As we age, dopamine production declines and the brain becomes less responsive to stimuli. This may explain why older people complain about time passing too quickly.
Animals also perceive time differently. Scientists from the National University of Ireland in Galway compared how quickly animals of more than 100 species react to changes occurring around them and process the information received. It turned out that blowflies and dragonflies perceived time the fastest. Their vision allows them to process changes at a frequency of 300 Hz, that is, to record 300 times per second. Humans, by comparison, perceive the world at a frequency of only 65 Hz, dogs – 75 Hz, and the “slowest” eyes belong to starfish – only 0.7 Hz.
Does time have a beginning and an end?
The beginning of everything, including time, is considered to be the Big Bang: 13.8 billion years ago, the observable Universe began to expand and continues to grow with constant acceleration – no one knows when it will reach its limit. If the Universe expands forever, then time will continue along with it. But if a new Big Bang or another doomsday scenario occurs, our timeline will end and a new one will begin.
Technological trends of the year in the new review of the “Business Journal”. We present a comprehensive forecast of the main innovations and techno-trends that will be used and implemented this year.
The trend forecast will give us insight into how tech leaders, engineers and innovators in new technologies will shape our lives in the future.
More than 15 key technology trends of 2025 that entrepreneurs should pay attention to:
Content:
Democratizing Artificial Intelligence (AI)
Few innovations have generated as much interest and growth in the last few years as artificial intelligence (AI). AI technology is making inroads into finance, healthcare, manufacturing, and dozens of other industries. By comparison, AI adoption rates today are 2.5 times higher than they were in 2017, with about 50% of organizations implementing AI in at least one business function.
And this technology is no longer just for big companies; today, AI is available to everyone. Thanks to open-source AI solutions and lower cost and complexity of systems, the democratization of AI is in full swing.
A prime example is OpenAI, an artificial intelligence research company currently valued at over $29 billion. The company expects to reach $1 billion in net revenue in 2024.
OpenAI’s ChatGPT surprised the world when it was released in November 2022. The chatbot’s ability to take natural language prompts and generate conversational text for a variety of outcomes made people rethink what AI was capable of. More than 100 million people used ChatGPT within the first two months of its release.
ChatGPT is one of the most sensational trends in the tech industry. It made a splash in the world of automated chatbots with its GPT-3 technology, an autoregressive language model that gives AI “human-like” abilities to understand and generate text.
Additionally, using deep learning, ChatGPT can generate images from text prompts, solve math equations, and perform a variety of writing tasks, from finding clever video titles to writing poetry.
Following ChatGPT, Google launched its own AI chatbot called Bard. Microsoft also released a Bing chatbot using OpenAI technology. Even Meta is joining the AI race and plans to release its own large language model for professionals in government, academia, and research.
This type of natural language technology can revolutionize business operations. For example, customer service representatives can use it to respond to customer inquiries in seconds. Companies can use it to create personalized marketing and educational content without having to hire copywriters. Developers can use it to write complex code, and business leaders can use it to analyze data.
Cyber threats are becoming more advanced
Cybercrime is a constant and growing threat, from attacks on casual consumers to state-sponsored cyberwarfare. A 2022 global survey by Hiscox found that 43% of companies reported a cyberattack in 2021, and 48% reported at least one in 2022.
The most frightening statistic from the report was that 20% of attacked organizations said the cost of damage threatened their solvency.
Deepfake attacks are one of the most sophisticated ways hackers gain access to businesses. In a 2022 survey conducted by VMware, 66% of participating IT leaders said they had experienced a deepfake-related attack in the past 12 months. That’s up 13% from 2021.
Deepfake technology uses AI and deep learning to create convincing videos, images, and audio of fake events and people. This technology has been around for a few years now, and it keeps getting better – and more accessible to hackers.
Cybercrime has become so widespread that the $155 billion cybersecurity industry is expected to grow to $376 billion by 2029, according to analysts and experts.
A type of machine learning technology called generative adversarial networks makes deepfake models nearly impossible to detect. Additionally, the advent of 5G networks makes it easier to manipulate video in real time.
Deepfakes are particularly useful for cybercriminals who commit BEC (business email compromise) scams. Manipulating face-to-face verification methods with deepfakes is another opportunity for cybercriminals.
Detecting deepfakes and other incoming threats is very much a defensive play for organizations. Security professionals are always one step behind the attackers. However, cybersecurity professionals are using artificial intelligence and other advanced technology solutions as early as possible to detect and stop attacks.
A 2022 IBM report found that organizations that use AI tools along with automation reduce the breach lifecycle by 74 days and save $3 million compared to those that do not use these cybersecurity solutions.
Not only can AI tools recognize attacks before human operators, they can also be configured to stop an attack and alert IT staff before a breach gets out of control.
Ambient Computing
Ambient computing is an IoT-based concept that promises a future of nearly invisible technology. Here’s why: ambient computing is an AI-driven network of devices and software that runs in the background (around us) with little or no human intervention.
Ambient computing uses both artificial intelligence and machine learning to interpret data collected from physical devices, such as smart thermometers and smartwatches, and make decisions on their own.
All of these technologies come together to create devices that can interact with both people and other devices.
It’s no surprise that with the potential to change the way we interact with everything from coffee makers to trucks, the ambient intelligence industry is expected to grow at an impressive 32% CAGR through 2028, reaching a total value of $225 billion.
Ambient computing is still a young technology, but use cases are already appearing in both consumer-facing and business solutions.
Voice assistants and smartphone-controlled thermostats are great examples of ambient computing, but the technology could become even more invisible.
For example, a person getting off a plane could be automatically alerted that their luggage is ready at a certain carousel. Once their luggage is picked up, they receive another notification that their rideshare is waiting at a certain location. While they are in the car, a coffee is automatically ordered at a nearby Starbucks, and they are automatically checked into their hotel.
In another scenario, homeowners may no longer need garage door openers. Instead, the owner’s smartphone will communicate their location to a home device that will open the garage door for them when they approach.
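The garage-door scenario is essentially a geofence check. A minimal sketch, assuming hypothetical home coordinates and a 50-meter trigger radius (the `should_open_garage` helper is invented for illustration):

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate distance in meters between two GPS points (haversine)."""
    r = 6371000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

HOME = (40.7128, -74.0060)   # hypothetical home coordinates
GEOFENCE_M = 50              # open the door within 50 meters

def should_open_garage(phone_lat, phone_lon):
    return distance_m(phone_lat, phone_lon, *HOME) <= GEOFENCE_M

print(should_open_garage(40.7128, -74.0060))  # at home -> True
print(should_open_garage(40.7500, -74.0060))  # ~4 km away -> False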
In factories, ambient computing is used to monitor and schedule machine maintenance. When an IoT device detects that a machine needs maintenance, it communicates with the software that schedules the maintenance and enters the machine into the schedule.
In retail, sensors on shelves can automatically order new stock when supplies run low. However, as this technology evolves, privacy and security concerns will be paramount.
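The shelf-sensor reorder logic can be sketched in a few lines. The SKUs, reorder points, and order quantities below are invented for illustration; a production system would also debounce sensor noise and talk to a supplier API.

```python
def check_shelves(stock, reorder_points, order_qty):
    """Return purchase orders for every SKU whose sensed stock
    has fallen below its reorder point."""
    orders = []
    for sku, level in stock.items():
        if level < reorder_points[sku]:
            orders.append({"sku": sku, "quantity": order_qty[sku]})
    return orders

# Hypothetical shelf readings from weight or RFID sensors.
stock = {"milk-1l": 3, "bread": 12, "eggs-12": 1}
reorder_points = {"milk-1l": 5, "bread": 10, "eggs-12": 4}
order_qty = {"milk-1l": 24, "bread": 30, "eggs-12": 18}
print(check_shelves(stock, reorder_points, order_qty))  # milk and eggs reordered
```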
Using Low-Code or No-Code AI
In 2024, artificial intelligence (AI) will shed its technical jargon in favor of simple drag-and-drop interfaces, leading to code-free AI. Today, everyone uses computers without understanding the underlying programming of operating systems. Likewise, AI operations and decisions will become more accessible, and programmers will not have to write a single line of code.
Its growing acceptance among non-experts will allow more industries to fully utilize the capabilities of AI-based intelligence and create smarter products. No-code AI has already penetrated the market with its user-friendly interfaces in various fields such as retail and website development.
By combining visual models with AI-powered tools, software developers using LCNC can skip, or at least speed up, the labor-intensive process of writing thousands of lines of code from scratch.
Because LCNC solutions are designed to be user-friendly, more and more non-developers are also able to create programs. These non-IT workers are called “citizen developers.” Gartner predicts that large enterprises will soon employ four times as many citizen developers as professional developers.
An Infragistics survey found that over 76% of organizations are already using LCNC. Another sign of its rapid growth in popularity is the expansion of the low-code market: in 2023 it reached $27 billion, up 20% from 2022.
LCNC solutions are based on so-called low-code application platforms (LCAPs). These platforms are component-based and offer predefined templates.
Marketing in the metaverse never paid off
A 2021 Bloomberg analysis projected that the metaverse market would reach a staggering $800 billion by 2024, and amid the hype and advertising surrounding the metaverse, investors kept pouring money into the technology.
As a result, the metaverse turned out to be another inflated market bubble in which investors lost huge amounts of money.
The Metaverse is a virtual 3D simulation where people can interact with each other across multiple platforms. In the age of Web 3.0, advertisers are realizing the endless marketing possibilities of this immersive experience, making it a source of brand awareness and engagement.
Brands like Nike, with its Nikeland metaverse space, are already tracking choices and consumption patterns in their metaverse stores using various forms of artificial intelligence and virtual reality (VR). Others are also looking to enhance the user experience by connecting their physical stores to the metaverse using QR codes.
Augmented Reality Goes Beyond Entertainment
As the lines between mixed reality, augmented reality, and virtual reality blur, the term “extended reality” has become an umbrella term that encompasses all of the above and more.
In the next few years, extended reality will find potential applications in everything from the metaverse to virtual concerts.
First, there are several B2B use cases where augmented reality can drive major innovation. One in particular is medicine: one application already uses augmented reality to train both new and experienced surgeons.
This technology could prove very useful in helping surgeons understand the intricacies of new, innovative surgeries that they may not have learned about in school or previous positions.
A study in Clinical Orthopedics and Related Research found that virtual reality training significantly improved surgical accuracy and completion speed. Another study found that virtual reality training improved surgical performance by 230%.
The automotive and manufacturing industries are two other sectors where augmented reality is expected to have an impact in the coming years. Technicians in these industries are required to perform highly complex processes and machine work.
Augmented reality can therefore provide a detailed, first-person view of these processes in a safe environment where errors are easily corrected. When technicians can practice assembly processes before setting foot on the actual line, human error and injury can be prevented.
Digital immune systems
A list of tech trends for 2024 would be incomplete without the implementation of a digital immune system (DIS). This system refers to the entire architecture of methods borrowed from software engineering, automation, development, operations, and analytics. It aims to reduce business risks by neutralizing defects, threats, and vulnerabilities in the system to improve the overall customer experience.
The importance of DIS lies in the automation of various elements of the software system so that it can successfully counteract all kinds of virtual threats. Gartner analysts predict that by 2025, companies that are already implementing DIS will reduce customer downtime by approximately 80%.
Robotic Process Automation (RPA) adoption to continue to grow
As the lines between artificial intelligence and machine learning continue to blur, companies are finding more ways to integrate automation into their processes. And one of the emerging technologies that executives are most excited about is robotic process automation (RPA).
RPA involves training software programs to carry out routine, repetitive tasks.
According to analysts’ forecasts, by 2030 the global RPA market will grow to $25 billion with a compound annual growth rate of 36%.
RPA software spending reached nearly $3 billion in 2022, up 21% from 2021. Surveys show that a fifth of businesses currently use RPA.
One of the main reasons why businesses are adopting RPA is due to the tight labor market and the need to maximize employee efficiency and productivity. Data shows that the average U.S. company with 500 employees loses more than $1.4 million annually due to time spent on repetitive tasks.
RPA solutions can also save time and money when it comes to low-value, routine tasks performed within a business. For example, when RPA copies and pastes information from a document into a database, the process is faster and the results are more accurate than when humans perform the task.
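The copy-paste example above is easy to sketch. This toy bot parses a plain-text invoice "document" and loads the fields into a database; the document format, field names, and table schema are all invented for illustration.

```python
import re
import sqlite3

# A text document an RPA bot might receive (hypothetical format).
document = """
Invoice: INV-1001  Customer: Acme Corp   Amount: 250.00
Invoice: INV-1002  Customer: Globex Inc  Amount: 1175.50
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (number TEXT, customer TEXT, amount REAL)")

# Extract structured fields from the unstructured text.
pattern = re.compile(r"Invoice:\s+(\S+)\s+Customer:\s+(.+?)\s+Amount:\s+([\d.]+)")
for number, customer, amount in pattern.findall(document):
    conn.execute("INSERT INTO invoices VALUES (?, ?, ?)",
                 (number, customer.strip(), float(amount)))

print(conn.execute("SELECT COUNT(*), SUM(amount) FROM invoices").fetchone())
```

Because the parsing rule is applied identically every time, the result is faster and more consistent than manual copy-paste, which is the accuracy gain the paragraph above describes.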
Estimates show that RPA can increase job capacity by as much as 50%, while freeing valuable human talent for higher-value tasks.
RPA can also be used to extract data from websites, book appointments, collect information from customers, monitor compliance, and engage employees, to name a few.
Hyperautomation
According to a 2021 PRNewswire report, the hyperautomation market will grow to $26.5 billion by 2028, while a Salesforce report predicts that at least 80% of organizations worldwide will adopt hyperautomation to optimize productivity and improve customer satisfaction. Hyperautomation is a combination of all the latest innovation tools, such as:
– No-code AI
– Business management platforms and automated workflows
– Integration platforms
– Intelligent document processing
– Natural language processing
– Robotic process automation
– Process mapping tools
Growing innovation and investment in clean, green technologies
In 2021 alone, climate tech startups raised nearly $40 billion to combat the effects of climate change. From large-scale industrial conversions to cleaner electric alternatives to the development of renewable energy sources like green hydrogen, green technologies will dominate the interconnected worlds of industry and innovation.
Robert Alters, CEO of BBVA Open Innovation, a company dedicated to innovation in the field of blockchain, says: “There are two megatrends, decarbonization and technological disruption, that can transform all industries.” This statement confirms the importance of “green” technologies as one of the main technological trends of 2023.
In 2022, the world invested a staggering $1.1 trillion in the low-carbon energy transition. This global investment set a new record and increased by 31% compared to 2021. It was also the first year that clean energy investment equaled that of fossil fuels. In fact, clean tech has gained such momentum that more than 25% of all venture capital is now directed toward clean tech companies.
Industry experts predict that 2023-24 will see an increase in funding and interest in clean technologies. Many of these companies may be working in the green hydrogen space. Hydrogen is the most abundant element in the universe and does not emit CO2 when burned, giving it great potential as a green energy source.
The green hydrogen market is expected to grow at a CAGR of 61% through 2027 and exceed $7 billion in volume.
The Hydrogen Council estimates that around $700 billion in hydrogen-focused investment will be needed to get the world to net zero emissions by 2050.
Carbon capture technology
But while environmental initiatives such as planting trees and switching to hydrogen-powered cars can reduce carbon emissions over time, many experts believe the impact of these efforts alone will be too little, too late.
According to the Brookings Institution, global greenhouse gas emissions in 2022 reached 58 gigatons, the highest level ever recorded. Carbon emissions are well-documented as one of the largest contributors to modern climate change.
To directly address some of these emissions, clean tech leaders are using so-called carbon capture technology to make immediate progress toward reducing and even reversing emissions.
The process involves working with super-emitters, such as power plants and concrete factories, to capture carbon molecules when they would normally be emitted into the air.
Carbon capture can effectively remove up to 90% of the air emissions from power plants and industrial facilities. From there, carbon capture companies isolate and extract the carbon through a variety of chemical processes before reselling it or burying it deep in the ground where it can be converted back into rock.
The rise of edge and quantum computing
A 2022 ReportLinker article estimates that the edge computing market will grow at a dramatic 21.6% CAGR between 2021 and 2028. Edge computing is a computing paradigm that collects, stores, and processes data at its source, rather than in a centralized server environment. This decentralized approach brings insights closer to the actual point of interaction and allows machines to analyze raw data in real time.
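The core idea, process raw data at the source and ship only compact results to the center, can be sketched as follows. The sensor readings and alert threshold are invented for illustration.

```python
def summarize_at_edge(readings, alert_threshold):
    """Aggregate raw sensor readings on the edge device and return only
    the compact summary that would be uploaded to the central server."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": round(sum(readings) / len(readings), 2),
        "alerts": [r for r in readings if r > alert_threshold],
    }

# 1,000 raw temperature samples reduced to a handful of numbers.
readings = [20.0 + (i % 7) * 0.5 for i in range(1000)]
readings[500] = 95.0  # a fault the edge node flags immediately
print(summarize_at_edge(readings, alert_threshold=80.0))
```

The edge device sends a few dozen bytes instead of a thousand samples, and the fault is detected locally in real time rather than after a round trip to a data center.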
Edge computing is everywhere, from smartwatches to computers monitoring traffic flow at intersections. However, tech trends in 2023 could see its radical adoption in the data analytics industry.
Quantum computing is getting closer to real-world applications
Quantum computing has been a topic of discussion since the 1980s. Fast forward to 2024, and the world is finally getting closer to creating real-world applications for this type of computing.
While traditional computers we know today operate on binary code (either 0 or 1), quantum computers use qubits, which allows a piece of data to exist in two states at once (both 0 and 1).
What this technology ultimately boils down to is computational speed. Complex calculations that would take modern computers millions of years could be solved in minutes with quantum computing.
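The "both 0 and 1 at once" idea can be made concrete with a toy simulation. A qubit is represented here as a pair of amplitudes, and the Hadamard gate turns a definite 0 into an equal superposition; this is a minimal sketch using only the standard library, not how real quantum hardware is programmed.

```python
import math

# A qubit is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1;
# |a|^2 is the probability of measuring 0, |b|^2 of measuring 1.
def hadamard(state):
    """Apply the Hadamard gate, which puts |0> into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)                 # a definite classical 0
superposed = hadamard(zero)       # "both 0 and 1 at once"
print(probabilities(superposed))  # roughly (0.5, 0.5)
```

Each added qubit doubles the number of amplitudes a simulation must track, which is why classical machines hit a wall and why hundreds of physical qubits are already beyond exact classical simulation.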
Technological and financial challenges have dogged the industry, but it has been gaining momentum in recent months. In fact, $35.5 billion was invested globally in quantum computing technologies across the public and private sectors in 2022.
So far, IBM is the leader in the quantum computing race. Osprey, IBM’s quantum computer, was unveiled in November 2022 and boasts 433 qubits. The company says it will have a computer with 4,000 qubits by 2025.
However, tech experts say quantum computers need millions of qubits to be fully operational. Many experts expect this to happen by 2027.
Alphabet ran a quantum computing division for six years before announcing in March 2022 that it would spin off as a separate company called Sandbox AQ. Sandbox AQ raised nine figures in funding in 2022 and another $500 million by February 2023.
While IBM and Alphabet’s projects look promising, other lesser-known companies are creating competition and raising large amounts of funding in the space. For example, China’s Origin Quantum raised $148.2 million in 2022, the largest amount in the quantum computing industry in a year.
According to McKinsey, up to $700 billion could be invested in quantum computing worldwide by 2035.
In the life sciences industry, quantum computing could be useful for modeling chemical processes, optimizing pharmaceutical drug design, and advancing the development of personalized treatments through genomics.
In the financial sector, quantum computing can significantly reduce market risks, improve fraud detection and speed up customer onboarding.
Quantum computing could also have different implications for consumers. Take electric vehicle charging, for example. Fully charging an electric vehicle at home takes an average of 10 hours. Even the fastest charging speeds still take 20 minutes.
With quantum technology, charging times could potentially be reduced to 3 minutes at home and a few seconds at high-speed charging stations.
Genomics
In addition to deepening our understanding of life and modern health analytics, genomics has also strengthened our understanding of neural networks. In the coming years, rapidly advancing technologies such as pathogen analysis, next-generation sequencing (NGS) genomic data analysis platforms, and scarless genome editing will use AI to decipher hidden genetic codes and patterns, making genomic data analysis and metagenomics leading growth areas in the biotech industry.
Tech trends in 2024 will also see the rise of functional genomics, which uses epigenome editing to uncover the influence of intergenic regions on biological processes.
Digital Twins: Bridging the Gap Between the Digital and Physical Worlds
Hyperautomation has triggered many 2024 technology trends, one of which is the use of digital twins, which refer to virtual representations of real-world objects. The ubiquity of various data points has created a need for data scientists to virtually observe changes in real-world events and processes by feeding machines large amounts of information.
Cloud service providers have already launched significant digital twin infrastructures, such as Microsoft’s Azure Digital Twins platform or Google’s supply chain digital twin, to optimize logistics and manufacturing.
A 2022 report from Grand View Research predicts a whopping 37.5% compound annual growth rate between 2023 and 2030, highlighting the growing relevance of digital twins in the day-to-day operations of industries.
Datafication of industries
The datafication of industries is the inevitable culmination of the innovations mentioned in the list of 2024 technology trends. The process refers to the act of transforming or modifying human tasks into data-driven technology. It is the first formative step towards a comprehensive data-driven society.
Workforce analytics, product behavior analytics, transportation analytics, health analytics, etc. are different branches of the same customer-centric analytics culture. The vast number of connected Internet of Things (IoT) devices means multiple data points that enable more effective analysis of a company’s strengths, weaknesses, opportunities, and threats. According to Fittech, the data science industry is becoming a lucrative business model, with its market set to exceed $11 billion in 2022.
Other Tech Trends in 2025
What else is hot in this year’s tech trends? Alternatives to cloud computing, augmented reality, chatbots, and 5G are all innovations that are becoming more advanced and noteworthy, both in business and in people’s lives.
Alternatives to Cloud Computing – Edge and Fog Computing
Advanced alternatives to cloud computing are emerging one after another. They simplify and speed up data processing. The most popular today are:
– Edge computing: data is processed closer to where it is requested.
– Fog computing: big data management in the layer between edge devices and the cloud.
5G Network – Speed and Technology Development
The 5G network provides the fast data transfer required for the latest technologies and the Internet of Things.
Augmented Reality – The Virtual World in Our Lives
Nowadays, augmented reality (AR) is present not only in the entertainment industry. AR is also used in the military, aviation, and medicine.
Chatbots – Customer Support
Chatbots, powered by advanced artificial intelligence, are becoming increasingly popular and relevant. They are commonly used to improve communication with customers.
Above all, the technology trends of 2024 center on artificial intelligence (AI) and the innovative solutions built on it. When planning future business activities today, companies should not lose sight of either automation or security.
The rapid transformation of technology brings with it new opportunities for the future of businesses and individuals. So, how prepared are you for the technologies that will gain importance in the coming period?
The technological innovations that will become prominent in 2025 and beyond have the potential to transform nearly every aspect of our lives. In this article, using data compiled from global reports and expert analysis, we’ve examined the technologies that are critical for businesses and employees to gain a competitive advantage and, consequently, prepare for the future.
1. The Transformative Power of Artificial Intelligence
Artificial intelligence (AI) will be at the center of the technology world in the coming years. Autonomous and semi-autonomous systems, particularly those powered by generative AI, will be used to increase efficiency and reduce costs in businesses. Research predicts that by 2028, at least 15% of daily business decisions will be made autonomously by AI.
Prominent Areas of Use in 2025:
Cyber Security: Real-time threat detection and attack prevention.
Education: Strengthening educational activities with smart learning systems.
Software: Assisting and accelerating software development through autonomous systems.
Customer Experience: Improving customer experience with automated and personalized customer support solutions.
Health: Accelerating disease mapping and drug development.
Risks and Challenges Related to Artificial Intelligence:
AI is not only an opportunity but also a risk that must be carefully managed. The ethical and legal management of AI is likely to be a top priority for organizations in 2025. Disinformation, ethical concerns, and security vulnerabilities highlight the need to keep AI technologies under control. Research suggests that disinformation security will be actively addressed by 50% of businesses by 2028.
2. Quantum Computing: The Key to the Future
Quantum computing has the potential to revolutionize many industries by pushing the boundaries of traditional computing. Data shows that at least one-third of businesses will begin investing in quantum computing by the end of 2025. The introduction of Turkey’s first quantum computer demonstrates that local developments in this field are accelerating in parallel with global advancements.
Major Sectors to Invest in Quantum Computing by the End of 2025:
– Media, Information, Telecom and Technology
– Government/Public Sector
– Financial Services
– Education
Risks and Challenges Associated with Quantum Computing:
Quantum computing will create a major paradigm shift in data security and accelerate the development of new cryptographic standards. Quantum computing is powerful enough to threaten existing security infrastructures. According to experts, most traditional asymmetric cryptography* methods will become insecure by 2029. Companies should prepare for these threats with post-quantum cryptography (PQC) solutions and begin making the necessary investments.
* Cryptography: The technique of hiding or encoding data to ensure that only the person who needs to see the information and has the key to break the code can read it.
3. Robotics and Autonomous Systems
Robotics will continue to expand its influence in both the physical and digital worlds. Robotic technologies are poised to revolutionize both the manufacturing and service sectors. Multifunctional robots will optimize business processes while providing significant cost advantages.
By 2025, 37% of technology leaders are considering introducing humanoid robots into operations, while 35% expect them to be partially deployed and 18% expect them to be fully deployed in operations.
Areas of Use of Robotic and Autonomous Systems:
Warehousing and Logistics: Picking, packing and transporting goods.
Health: Patient care, cleanliness, and handling hazardous situations.
Cyber Security: Autonomous threat detection, response and prevention systems.
Human-Machine Synergy:
Robotic and autonomous systems work together with humans to enable more efficient and flexible operations. Experts predict that these systems will become an integral part of daily business operations by 2030. By 2030, 80% of people will interact with intelligent robots on a daily basis. Today, this figure is less than 10%.
4. Digital Humans and the Metaverse
Digital humans and augmented reality (AR)-based systems offer new opportunities to transform customer experience and business processes. These technologies will have a growing impact, particularly in the retail and education sectors.
Digital Humans and the Potential Benefits of Metaverse Technologies:
E-commerce: Personalized shopping experiences.
Training: Hands-on learning simulations.
Business: Efficient meeting platforms for remote working.
Risks and Challenges of Digital Humans:
Alongside the opportunities afforded by digital humans, deepfake technologies and digital disinformation present significant threats. They are being used globally by cybercriminals in fraudulent activities and in disinformation campaigns aimed at political influence, particularly around elections. As companies and government agencies consider when and how digital humans will play a role in their operations, they must also promptly incorporate measures into their planning to address the new cyber threats these same technologies create.
5. Energy Efficiency and Sustainable Technologies
As digital transformation continues, energy efficiency is becoming increasingly important, and sustainability is a key priority for businesses. Energy-efficient IT systems both reduce costs and minimize environmental impact. This plays a critical role in reducing the carbon footprint of IT operations.
Featured Areas of Use
Sustainable Product Development: Using energy-efficient computing to design products that consume less energy.
IoT Sensors: Energy efficiency through real-time environmental monitoring.
Green Data Centers: Operational and environmental efficiency with lower energy consumption and cost advantages.
6. Hybrid Computing: Combining Power and Flexibility
Hybrid computing stands out as a system that combines different computing technologies to address the complex computing needs faced by modern businesses and individuals. Technologies such as CPU (Central Processing Unit), GPU (Graphics Processing Unit), ASIC (Application-Specific Integrated Circuit), neuromorphic, quantum computing, and photonic systems are combined under the umbrella of hybrid computing, providing both security and flexibility.
According to research, hybrid computer systems will play a critical role in many sectors in 2025 and beyond with their capacity to solve complex computational problems.
Areas of Use of Hybrid Computers
Business and Enterprise Solutions: Hybrid systems provide businesses with a competitive advantage by accelerating big data analysis. They also increase data security with cloud-based backup solutions.
Healthcare: Offers high speed and precision in genetic research and treatment modeling. Enables secure storage and analysis of patient data.
Education: Hybrid systems integrated with quantum computers are used in scientific computations.
Finance and Banking: Hybrid computing is used to model financial risks and markets. Fast and accurate analysis increases the effectiveness of credit processes.
7. Spatial Computing: The Interaction of the Digital and Physical World
Spatial computing is a technology trend that takes on a new dimension by integrating digital content into the physical world. Combined with advancements like augmented reality (AR), mixed reality (MR), and artificial intelligence (AI), it allows users to interact with the real world and the digital world.
Areas of Use of Spatial Computing:
Business: Remote teams can gather in 3D virtual meeting rooms to achieve more interactive and effective work. Companies can create digital copies of physical assets to track performance, predict maintenance needs, and run simulations.
Education: With mixed reality-enabled simulations, students and staff can experience real-life scenarios risk-free.
Retail and e-commerce: Customers can test products they want to buy in their own environments using spatial computing technologies. AR-based guidance systems in stores can provide customers with product information and simplify the shopping process.
Healthcare Services: Spatial computing systems that monitor patients’ condition in real time without the need for wearable devices enable healthcare personnel to intervene more quickly.
Gaming and Entertainment: Spatial computing transforms physical environments into a digital playground, providing the user with a unique experience.
8. Invisible Intelligence: The Unnoticed Power of Technology
Ambient invisible intelligence is a technology solution in which small, low-cost sensors and tags embedded in everyday objects form an ecosystem that operates unobtrusively. These systems monitor and analyze environmental conditions, providing users with more efficient, personalized, and sustainable living spaces.
Areas of Use of Invisible Intelligence
Retail and e-commerce: Sensors in stores can analyze customer movements to provide personalized recommendations. Automatic reorder systems can keep shelves from running out of stock.
Smart Buildings and Cities: IoT sensors can analyze energy consumption in buildings in real time, optimizing lighting and heating systems. Traffic, waste management, and air quality can be monitored to create more livable cities.
Industrial Applications: Sensors can increase the efficiency of production lines by continuously monitoring the operating status and performance of machines. They can also ensure quality by continuously monitoring product temperature, humidity, and location during transport.
Office Management: Office occupancy rates can be monitored to ensure efficient use of workspaces. Lighting, temperature, and air quality can be automatically optimized for employee comfort.
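The office-management case reduces to a simple control rule: occupancy sensors drive lighting and climate per zone. A minimal sketch, with hypothetical zone names and a two-mode HVAC policy invented for illustration:

```python
def adjust_office(zones):
    """Decide lighting and HVAC settings per zone from occupancy sensors."""
    actions = {}
    for zone, occupied in zones.items():
        actions[zone] = {
            "lights": "on" if occupied else "off",
            "hvac": "comfort" if occupied else "eco",
        }
    return actions

# Hypothetical occupancy readings from ambient sensors.
sensor_data = {"meeting-room-1": True, "open-space": True, "meeting-room-2": False}
print(adjust_office(sensor_data))
```

Running such rules continuously and invisibly, with no one pressing a switch, is precisely what makes this class of systems "ambient."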
9. Disinformation Security: A Critical Line of Defense in the Digital Age
Disinformation is rapidly spreading as one of the greatest threats of the digital age. Misleading information not only impacts individuals but can also target businesses, governments, and societies, causing widespread harm. Research shows that disinformation security has become a priority for businesses to rebuild trust in information in the digital world.
Disinformation Security in the Future
Technological advancements: With artificial intelligence and machine learning, disinformation detection systems will become faster and more effective. Blockchain technology will also be used to verify the source of information.
Systems Becoming Mandatory for Businesses: According to research company Gartner, by 2028, 50% of businesses will be using specialized products and services for disinformation security.
New Threats and Solutions: The proliferation of deepfake technology will necessitate the development of more sophisticated detection and verification tools.
10. Neurological Enhancement: The Technology That Redefines Humanity
Neuroenhancement is a branch of science comprised of technologies that read, analyze, and, if necessary, write back brain activity to enhance human cognitive capacity. These technologies have the potential to radically transform learning, decision-making, and productivity processes by enhancing individuals’ capabilities through brain-machine interfaces (BMI). Research in this area predicts that neuroenhancement will revolutionize healthcare, education, and business, in particular.
Advantages of Neurological Enhancement
A Revolution in Healthcare: It can be used to treat brain injury, stroke, and other neurological diseases. Simulations based on brain activity can help surgeons specialize more quickly and safely. It can also be used to create personalized treatment programs to restore motor skills.
Personalization in Education: Personalized educational materials can be presented based on students’ brain activity. Neurological data-backed teaching methods can be developed to enhance the retention of learned information.
Increased Performance in Business: Technologies that increase the brain’s focus can lead to more efficient work processes. Neurological analyses can increase team harmony and employee productivity.
Personal Development and Well-Being: Can help manage issues such as stress, anxiety, and depression. Can offer programs that improve individuals’ memory and problem-solving skills.
11. 5G Private Networks and Industrial Automation
5G private networks offer businesses a dedicated communications infrastructure that supports high bandwidth, low latency, and high device density. This infrastructure will play a critical role, particularly in the success of industrial automation.
Main areas of use:
– Ultra-low latency communication enabling synchronous operation of industrial robots.
– Digitalization and real-time monitoring of production lines.
– Effective operation of autonomous transport systems and storage robots.
– Optimization of inventory management with IoT sensors.
Smart Cities and Public Services:
– Real-time management of traffic lights, waste management and energy systems.
– 5G-based video surveillance and high-speed communication solutions in public safety.
One of the key uses of 5G private networks is industrial automation, enabling production processes to be more efficient, faster, and error-free. Automation solutions powered by 5G technology will accelerate digital transformation, and the leading technologies in this area include:
• Digital Twins:
5G enables real-time updates to digital copies of production facilities and processes, allowing for quick identification and resolution of issues.
• IoT and Sensor-Based Manufacturing:
Optimizing production processes with data collected from IoT devices and sensors via the 5G network.
• Autonomous Robots and Vehicles:
Synchronized operation of robotic systems and autonomous vehicles in industrial areas with high-speed communication.
12. Macro Security, Cyber Anomaly Detection, and Deep Packet Inspection
In an era of rapidly evolving cyber threats, it is critical for organizations to strengthen their security infrastructure with proactive and innovative approaches. Macrosecurity, cyber anomaly detection, and deep packet inspection technologies enable organizations to combat these threats and ensure data integrity. Macrosecurity provides a comprehensive security approach across networks, devices, and applications. Organizations need to assess cyber threats comprehensively and provide solutions that encompass the entire infrastructure.
Cyber Anomaly Detection
Cyber anomaly detection plays a critical role in preventing cyberattacks by identifying abnormal activity in network traffic and user behavior. The following strategies can be implemented in this context:
• Behavioral Analysis:
Development of algorithms that analyze user behavior and detect deviations from normal.
Identifying insider threats and unknown threats.
• Detection with Machine Learning:
Improving anomaly detection accuracy with machine learning-based models.
Rapid adaptation to new types of threats with continuously learning systems.
• Real-Time Monitoring:
Real-time monitoring of all network activities and immediate notification of threats.
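The behavioral-analysis idea above can be sketched in a few lines: learn a per-user baseline from normal activity, then flag deviations beyond a fixed number of standard deviations. The traffic figures and the 3-sigma threshold below are illustrative assumptions, not parameters of any particular product.

```python
import statistics

def fit_baseline(samples):
    """Learn a simple per-user baseline: mean and standard deviation
    of an activity metric (e.g., bytes transferred per hour)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical training data: KB/hour for one user during a normal week
normal_traffic = [120, 135, 110, 128, 140, 125, 132, 118, 130, 122]
baseline = fit_baseline(normal_traffic)

print(is_anomalous(127, baseline))  # False: typical activity
print(is_anomalous(900, baseline))  # True: sudden spike worth investigating
```

Production systems replace the static threshold with continuously retrained machine-learning models, but the core logic is the same: model "normal" and alert on deviations.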
In addition to these measures, security controls must be implemented at the packet level for corporate data flows. In this context, the following points are of particular importance:
– Analysis of data packets in network traffic at the content level.
– Detection of malware, malicious content and data breaches.
Application Layer Monitoring:
– Monitoring application layer data and providing control over sensitive information with DPI technology.
Encrypted Traffic Analysis:
– Development of technologies that can securely analyze SSL/TLS encrypted traffic.
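As a toy illustration of content-level packet analysis, the sketch below scans payload bytes for known signatures. The signature list and packet are invented for the example; real DPI engines operate on reassembled streams with far richer rule sets and protocol parsers.

```python
# Toy deep packet inspection: scan each packet payload for known byte
# signatures. The signature list and packet below are invented examples.
SIGNATURES = {
    b"EICAR-STANDARD-ANTIVIRUS-TEST": "test-malware marker",
    b"BEGIN RSA PRIVATE KEY": "possible private key leak",
}

def inspect_payload(payload: bytes):
    """Return descriptions of every signature found in one packet payload."""
    return [label for sig, label in SIGNATURES.items() if sig in payload]

packet = b"POST /upload HTTP/1.1\r\n\r\n-----BEGIN RSA PRIVATE KEY-----..."
print(inspect_payload(packet))  # ['possible private key leak']
```

For encrypted traffic, this kind of matching only works after authorized TLS interception, which is why encrypted traffic analysis is listed as a separate challenge above.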
Turn Technology into Opportunity
The technological advancements and digital transformation of 2025 and beyond present both significant opportunities and significant challenges for businesses. At Karel, we stand by you to optimize your business processes and gain competitive advantage by leveraging the opportunities offered by technology. By taking the right steps in your technology-focused strategies with Karel solutions, you can build the future today.
Despite years of observations, dark matter particles have not been caught — but scientists are not stopping. We tell you how modern dark matter detectors are designed and what results their work has already brought
Why is this important?
Dark matter is a hypothetical form of matter that makes up about 27% of the mass-energy of the universe but does not emit or absorb light or interact with electromagnetic fields, making it invisible to conventional observation methods.
Its existence was proposed to explain anomalies in the universe. If you add up all the visible matter in a galaxy — stars, gas, and dust — it turns out that the stars at the edges of the galaxy should be moving more slowly than those near the center. In fact, they move faster than predicted. This means that there is something else in the galaxy, something invisible, that is pulling on the stars. This invisible component is called dark matter. It is thought to be made of unknown particles, such as weakly interacting massive particles (WIMPs) or axions.
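The rotation-curve argument can be made quantitative with Newtonian gravity: if only the visible mass lies inside radius r, the circular speed should be v = sqrt(GM/r) and fall off with distance. The sketch below uses an illustrative visible mass of roughly 10^11 solar masses; the fact that measured speeds stay flat at large radii is what points to unseen mass.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 2e41   # assumed visible mass of a galaxy, kg (~10^11 Suns)
KPC = 3.086e19     # one kiloparsec in meters

def keplerian_speed(r_m):
    """Circular orbital speed if only M_VISIBLE lies inside radius r (m/s)."""
    return math.sqrt(G * M_VISIBLE / r_m)

for r_kpc in (5, 10, 20, 40):
    print(f"{r_kpc:>3} kpc: {keplerian_speed(r_kpc * KPC) / 1000:.0f} km/s")
# The prediction falls as 1/sqrt(r); measured galactic rotation curves
# instead stay roughly flat at large radii, pointing to unseen mass.
```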
The discovery of dark matter could revolutionize our understanding of physics, confirming or disproving theories such as supersymmetry or revealing new particles. Supersymmetry is the hypothesis that every known particle has a “partner” — a heavier superparticle. In 2025, scientists around the world are continuing to improve detectors to catch the signals of these particles. The success of these experiments could be one of the biggest scientific breakthroughs of the 21st century.
How to Search for the Invisible: Operating Principles of Detectors
The search for dark matter is one of the most ambitious tasks in modern physics. Although we cannot observe these particles directly, scientists assume that in rare cases they may interact with ordinary matter, leaving faint but detectable traces. This principle underlies the work of various detectors, each of which uses its own approach to “catching” these invisible particles.
The most common method is direct detection. It involves underground installations filled with liquid xenon or argon. If a dark matter particle collides with an atomic nucleus or electron inside the detector, it can cause a microscopic flash of light (scintillation) or ionization—the knocking out of electrons. These signals are picked up by ultra-sensitive detectors. To avoid false alarms from background radiation, such experiments are located deep underground, away from cosmic rays and other sources of noise.
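A back-of-envelope estimate shows how faint these traces are. For elastic scattering, the maximum energy a particle of mass m can transfer to a nucleus of mass M is 2μ²v²/M, where μ is the reduced mass. The WIMP mass and the ~230 km/s galactic speed below are typical assumed values, not measurements; the result lands at the keV scale, which is why the detectors must be so sensitive.

```python
# Maximum energy a dark matter particle can deposit in an elastic collision
# with a xenon nucleus: E_max = 2 * mu^2 * v^2 / M, with mu the reduced mass.
# Masses are in GeV/c^2 and the speed is a fraction of c.
m_wimp = 100.0           # assumed WIMP mass, GeV
m_xe = 122.3             # mass of a xenon nucleus (~131 u), GeV
v = 230e3 / 3e8          # typical galactic speed as a fraction of light speed

mu = m_wimp * m_xe / (m_wimp + m_xe)   # reduced mass of the colliding pair
E_max_gev = 2 * mu**2 * v**2 / m_xe    # maximum recoil energy, GeV
print(f"maximum recoil energy ~ {E_max_gev * 1e6:.0f} keV")
```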
Another approach is used to search for axions, hypothetical, as yet undiscovered particles that may also make up dark matter. According to one theory, axions can turn into photons, or particles of light, if they are placed in a very strong magnetic field. So to detect them, scientists create special installations with powerful magnets and sensitive materials that can detect the appearance of photons.
There are also indirect detection methods, in which scientists look not for the dark matter particles themselves, but for what remains after they decay or collide. When such particles collide and destroy each other (a process called annihilation), they can produce other particles, such as neutrinos (which are almost weightless and interact very weakly with matter), gamma rays (high-energy radiation), or even antimatter particles. These signals could come from the center of the Galaxy, the Sun, or other areas where scientists believe dark matter is especially abundant.
Cryogenic detectors use extremely cold crystals of germanium or silicon. When a dark matter particle collides with an atom in the crystal, it can release a tiny amount of heat or electrical charge—a signal that can be detected at temperatures close to absolute zero.
Finally, some scientists are betting on accelerator experiments – these are experiments at huge installations, such as the Large Hadron Collider, where particles are accelerated to near-light speed and collided with each other. Here, conditions are created for the possible birth of dark matter particles in high-energy collisions. Such events are not observed directly, but are recorded by the disappearance of energy or the deviation of particle trajectories.
In all these approaches, the main challenge remains the same: separating the extremely rare signals from much more frequent background processes, including radioactive decays, natural radiation from the Earth, and the influence of cosmic particles. To overcome these difficulties, scientists are developing increasingly sensitive and “quiet” devices that can detect the potential presence of dark matter among thousands of ordinary events.
Leading projects: what’s working right now
XENONnT: Leader in Sensitivity
XENONnT, located in the Gran Sasso underground laboratory in Italy, is one of the most sensitive detectors for searching for WIMPs and light dark matter. It uses 6 tons of liquid xenon, which produces flashes of light and ionization signals when interacting with dark matter particles.
In 2023, XENONnT set the tightest constraints yet on the interaction of light dark matter with electrons (mass less than 0.03 keV) by analyzing events with one or two electrons. These data were collected in two short runs in 2021, each lasting about a month. Although no dark matter signals were detected, the experiment significantly narrowed the range of possible parameters for light particles.
XENONnT continues to collect data, refining its techniques to suppress background noise such as radioactivity and cosmic rays. Its successes make it a leader in the race for direct detection, but the lack of signals highlights the difficulty of the task.
LUX-ZEPLIN: American heavyweight
LUX-ZEPLIN (LZ) is another powerful dark matter experiment located at the underground Sanford Laboratory in the US. It uses 7 tons of liquid xenon to search for dark matter particles called WIMPs (weakly interacting massive particles). LZ is considered cutting-edge because it is one of the most sensitive in the world for searching for heavy particles (10-100 GeV) and competes with XENONnT. It is supported by major US research centers, making it an important player in the field.
In 2023, LZ set new constraints on WIMPs: it showed that particles with masses of 10–100 GeV and certain interaction strengths are unlikely to exist, since no such signals were captured. This helped rule out some theories about dark matter. In 2024, LZ continued to collect data and improve its analysis methods.
Dark matter has not yet been found, but LZ is preparing for new launches in 2025 to increase the chances of discovery. LZ not only searches for dark matter, but also tests other mysteries of physics. Its results help scientists understand which particles cannot be dark matter and provide new ideas for future experiments.
PandaX-4T: China’s Breakthrough in Dark Matter Search
PandaX-4T is an experiment at the China Jinping Underground Laboratory (CJPL), the deepest in the world, which shields it from cosmic rays. It uses 3.7 tons of liquid xenon to search for dark matter, such as WIMPs and light particles. PandaX-4T is considered a breakthrough because it combines high sensitivity with cutting-edge technology: its detector registers light and electrical signals from particle collisions, and systems that purify the xenon of impurities (such as radon and krypton) reduce the background to a minimum. This allows it to search for signals that other detectors might miss. The project is supported by a team of 40 scientists from leading universities in China.
In 2024, PandaX-4T completed an analysis of 1.54 ton-years of data (a measure of how much xenon was used and how long it took). They were looking for light dark matter particles (with masses between 0.02 and 10 MeV) that could have received energy from the Sun and collided with electrons in the xenon. They found none, but PandaX-4T set the world’s strictest limits on such particles: their interactions with electrons were weaker than previously thought, with a record limit of 3.51×10⁻³⁹ cm² for a mass of 0.08 MeV.
Also in 2024, PandaX-4T detected signals from solar neutrinos (particles from the Sun) for the first time, helping to understand how they interfere with the search for dark matter. In 2025, the project continues to collect data and improve the detector to search for WIMPs and other phenomena such as neutrinoless double beta decay. This is a hypothetical process in which two neutrons inside an atomic nucleus turn into two protons and emit two electrons — but without a neutrino, unlike normal double beta decay. Detecting it could prove that neutrinos and antineutrinos are the same particle, which would be a major discovery in particle physics.
PandaX-4T is the Chinese leader in the search for dark matter, competing with XENONnT and LZ. Its results rule out some models of light particles and help to understand how neutrinos affect experiments. The unique depth of the laboratory and the purity of xenon make PandaX-4T one of the best in the world for searching for rare signals.
Why has no one caught dark matter yet?
Despite decades of searching and impressive technological advances, scientists have yet to detect a reliable signal from dark matter. The problem is that if particles like WIMPs or axions do exist, they interact with ordinary matter so weakly and infrequently that even the most sensitive instruments could go years without detecting a single event.
Even deep underground, in isolated conditions, it is not possible to completely eliminate background noise. Radioactive decay and cosmic rays still penetrate the installations, creating signals that are difficult to distinguish from a possible trace of dark matter.
To complicate matters, physicists aren’t entirely sure what dark matter is made of. It may not include WIMPs or axions at all, but rather particles of a different nature for which existing methods simply aren’t applicable.
Moreover, the technologies used today have already reached the limits of sensitivity in some mass ranges. To take the next step, science needs new materials, new detection principles, and larger installations – perhaps even a completely new approach to the task itself.
We have collected winning shots from international photography awards that show how photography helps scientists study complex natural phenomena
Nikon Small World
Nikon Small World is one of the most prestigious photomicrography competitions, established by Nikon in 1975. The uniqueness of the award is that the jury evaluates the works from both a scientific and an aesthetic point of view, thus creating a unique bridge between science and art.
Mouse brain tumor cells
Differentiated mouse brain tumor cells (Photo: Bruno Cisterne with assistance from Eric Vitriol/Nikon Small World)
The photo shows mouse brain tumor cells, clearly showing the actin cytoskeleton (thin filaments that support the cell’s shape and facilitate its movement), microtubules (structures through which various substances move) and nuclei that store the cell’s genetic material. The author of the image is Dr. Bruno Cisterne in collaboration with Dr. Eric Vitriol of Augusta University, USA.
Scientists are studying how abnormalities in the cytoskeleton can lead to diseases such as Alzheimer’s and amyotrophic lateral sclerosis. “One of the key challenges in studying neurodegenerative diseases is that their causes are not fully understood,” explains Cisterne. “In order to develop effective treatments, we first need to understand the underlying mechanisms of these diseases. Our study plays an important role in finding this knowledge and opens up prospects for the development of new drugs.”
The image was taken at 100x magnification and was the winner of the Nikon Small World 2024 competition.
The optic nerve of a rodent
The winner of the 2023 Nikon Small World competition is an image of a rodent’s optic nerve taken by Hassanain Kambari and Jaden Dixon of the Lions Eye Institute in Australia. The image, taken at 63x magnification, shows a section of the rodent’s optic nerve head, the point where the optic nerve exits the eyeball.
Special fluorescent dyes are used to visualize collagen fibers (green), nerve cells (red) and the cell nucleus (blue). This image plays a role in the study and treatment of diabetic retinopathy, a retinal damage that occurs with diabetes.
“The visual system is a complex and highly specialized organ, and even relatively minor disturbances in retinal blood flow can lead to catastrophic vision loss. I decided to enter the competition to demonstrate the complexity of retinal microcirculation,” Kambari shares.
Gecko embryo foot
The photo shows the foot of a large Madagascar gecko embryo; the limb is only 3 mm long. The authors of the photo are scientists from the University of Geneva: Dr. Grigory Timin performed the work under the supervision of Dr. Michel Milinkovic. The image is artificially colored: the nerves are highlighted in blue, and the bones, tendons, ligaments, skin and blood cells are in warmer colors.
The shooting method used was confocal microscopy, which allows obtaining clear and high-contrast images by using point lighting and eliminating scattered light. In total, several hundred detailed images were taken, which were then combined into a single image. The entire process took about two days and required almost 200 GB of data.
The photo was the winner of the Nikon Small World 2022 competition.
Wellcome Photography Prize
The Wellcome Photography Prize is a photography award organised by the Wellcome Trust, a British charitable foundation dedicated to health research. The trust was founded in 1936 with money left as a legacy by pharmaceutical magnate Henry Wellcome.
The competition includes several nominations, one of which is “Wonders of Scientific and Medical Visualization”.
Cholesterol in the liver
The 2025 winner in the Wonders of Scientific and Medical Imaging category is a photograph of cholesterol in the liver. The image shows cholesterol crystals (blue) inside a liver cell (purple). When cholesterol changes from a liquid to a crystalline state, it can build up in blood vessels and damage them, leading to heart attacks and strokes.
Scientific photographer Steve Gschmeissner created the image using an electron microscope, which allows one to see tiny structures with very high resolution.
Triatomine bugs
Pictured is a triatomine bug, common in Latin America. The bite of this insect is dangerous to humans – it is a carrier of Chagas disease.
Chagas disease can lead to serious heart and digestive problems, especially if left untreated. It most often affects low-income people in rural areas of Latin America.
The photograph was taken using a cryoscanning electron microscope, and thanks to artificial coloring, individual cellular structures can be clearly seen.
Photo by Ingrid Augusto, Kildare Rocha de Miranda and Vania da Silva Vieira, researchers from Brazil who study the triatomine bug and hope their work will lead to a better understanding of how to combat Chagas disease.
Air Pollution
Polluted air on Brixton Road in London (Photo: Marina Vitaglione/Wellcome Photography Prize 2025)
The photograph shows fine particles, the most common air pollutant. Artist Marina Vitaglione, together with scientists from Imperial College London, collected air samples from different areas of the city. Vitaglione photographed these samples under a microscope and created prints using hand-made analogue printing with the addition of iron salts. This is why the colour of the photographs acquired a blue tint.
The photograph shows specimens collected from Brixton Road in south London.
“Pollution levels in central London have fallen over the past four years, but a quarter of roads still exceed legal limits for nitrogen dioxide (mostly from diesel engines) and millions of Londoners continue to breathe polluted air. This work aims to visualise the ‘invisible killer’,” says Vitaglione.
Royal Society Publishing Photography Competition
The Royal Society is a British scientific organization founded in 1660. Its mission is to advance science and disseminate scientific knowledge. One of the Royal Society’s modern initiatives is the annual scientific photography competition.
Shark attack
‘Hunting from Above’: A school of fish surrounded by sharks (Photo: Angela Albi. Drone pilot: August Paula/Royal Society Publishing Photography Competition)
The shark photo is called “Hunting from Above.” It shows a large school of small fish confronting four young blacktip reef sharks. The image was taken by a drone off the coast of the Maldives.
This photo is part of a scientific study by biologist Angela Albi from the Max Planck Institute in Germany. Albi studies the interactions between blacktip reef sharks and schools of fish in the Maldives. “In this image, the shark on the left suddenly goes from swimming calmly in a school to starting to hunt. It stands out because of its posture,” Albi says. Biologists study photos and videos to understand how sharks hunt and how other fish react to them.
The photograph won first place in the Behaviour category and overall the 2024 Royal Society Publishing Photography Competition.
Constellation Cassiopeia
The Heart and Soul are two nebulae located in the constellation Cassiopeia, approximately 6,000 to 7,000 light years from Earth. The author of the photograph is Imran Sultan, a research fellow at Northwestern University in the United States. He spent almost 14 hours photographing these nebulae to capture their details and features. The photo won in the Astronomy category.
“My astrophotography has given me many opportunities to make astronomy more accessible to a wider audience. The fact that astronomers observed these two nebulae and saw a ‘heart’ and a ‘soul’ in them highlights the human element in astronomy,” Imran says.
Common toads during the mating season
The photograph shows common toads during the breeding season, gathered in large numbers in shallow water.
“This photo was taken in the spring, when I was collecting eggs for experiments with my research team,” says the author of the photo, Ovidiu Dragan, a PhD student at Ovidius University in Constanta. “The whole area was literally overflowing with toads desperately trying to mate. What is especially interesting is that the second toad from the top is a male green toad, not a common toad. He was trying to mate with another species with which they coexist in their natural habitat. This behavior in the mountains was a big surprise for us.”
The photo won second place in the Behavior category. The photo was taken with a phone.
#ScientistAtWork
#ScientistAtWork is an annual international photo contest organized by Nature magazine. Its goal is to showcase scientists at work, whether in a lab, a park, the taiga, fjords, or even Antarctica. The organizers encourage researchers to share photos of their workdays and promise cash prizes to the winners.
Biologists in northern Norway
The winner of Nature’s Scientist at Work photo contest in 2025 is a photo by biologist Audun Rikardsen of his work in a fjord in northern Norway. A team of scientists from the University of Tromsø monitors the migration of herring, which attracts killer whales and humpback whales. They tag the whales with satellite tags, which are deployed with an air gun, to track their movements.
The tags collect data on the whales’ locations, as well as recording dive parameters, duration and depth. Scientists also often perform biopsies, taking tissue samples to monitor the whales’ health.
The work keeps researchers in close proximity to the animals. “You can actually feel their [the whales’] breathing,” says biologist Emma Vogel, who took the photo. “And you hear them before you see them, which is always amazing.”
Defenders of Frogs
This photo is another ScientistAtWork winner. It shows ecologist Kate Belleville of the California Department of Fish and Wildlife holding baby frogs.
In the Lassen National Forest in northern California, a team of scientists and volunteers capture young frogs to bathe them in an antifungal solution. The solution kills the chytrid fungus that is causing mass die-offs of amphibians around the world. After the treatment, the frogs are released back into the wild.
The froglets are also implanted with elastomer tags with a unique combination of colors. These tags form a code that glows under ultraviolet light.
As the researchers note, it is extremely difficult to notice the small frogs – if you do not know what you are looking for, they can easily be mistaken for scurrying crickets. Therefore, the work requires special caution and attention.
We tell you whether flights to asteroids to extract palladium, iridium, niobium and other metals in outer space can pay off and how this market works
Mining asteroids has been described many times in science fiction, from the novel “Star KEC” by Soviet writer Alexander Belyaev to the TV series “The Expanse”. Such a step has long seemed natural and logical. By 2025, science and business have reached the level where science fiction can become reality.
Together with experts, we figure out whether mass mining of minerals on asteroids is really possible and what volumes this market can reach.
Are asteroids really a treasure trove of resources?
Humanity is accustomed to the fact that resources lie deep beneath the Earth’s surface, but few think about how exactly they got there. Over billions of years, fragments of asteroids have fallen to the planet in meteorite showers. The asteroids themselves – celestial bodies orbiting the Sun – consist of silicate minerals and carbon compounds. It is due to their composition that vast deposits of minerals have accumulated on Earth.
Importantly, volumes that are huge by the planet’s standards correspond to only small fragments of asteroids. More than a million such celestial bodies have been discovered to date, ranging in size from tens of meters to hundreds of kilometers. All this makes space, and asteroids in particular, a virtually inexhaustible source of minerals.
As explained to RBK Trends at the Russian State Geological Prospecting University named after Sergo Ordzhonikidze (MGRI), asteroids are divided into three main types, and each contains valuable resources. Thus, there are metallic asteroids containing iron, nickel, cobalt, gold, rhodium, and platinum group metals, namely iridium, ruthenium, and platinum. A striking example of such an asteroid is the famous Psyche, one of the largest asteroids discovered by mankind.
In addition, there are carbonaceous asteroids – sources of water and rare earth elements. The most famous of them are Bennu and Ryugu; the Japanese spacecraft Hayabusa2 touched down on Ryugu several years ago. The third type is stony asteroids, containing iron, nickel, cobalt, and tellurium – for example, the asteroid Itokawa.
What are the actual stocks and needs?
The finiteness of resources on the planet is obvious: the number of people is growing along with the volume of resource consumption, which means that sooner or later they will run out. However, is the situation with minerals on Earth so critical? Experts have different opinions on this matter.
The expert also cited the transport sector as an example. According to him, building 12–15 million electric vehicles running on hydrogen fuel cells would require increasing platinum production by half. Meanwhile, on just one near-Earth asteroid, (6178) 1986 DA, the reserves of platinum group metals are three times higher than in the Earth’s crust.
“We still perceive space as a kind of frontier, where devices fly to do something important for the Earth and sometimes make forays beyond the Earth’s orbit. However, if we look 20 years into the future, we will see a huge, vibrant industry with active missions beyond orbit to the nearest planets, and perhaps even further,” explained Evgeny Kuznetsov.
Stepan Ustinov, head of the basic department of methods for studying ore deposits at MGRI and candidate of geological and mineralogical sciences, shared his own forecasts for the depletion of the Earth’s resources with RBK Trends. According to him, data from the Russian Federal Agency for Subsoil Use and the US Geological Survey indicate that explored reserves of rare metals, at current consumption, will last humanity 50–100 years. At the same time, easily accessible deposits are gradually being depleted, forcing a transition to the development of poorer ores. This, in turn, increases the cost of extraction.
MGRI believes that humanity will have no choice but to extract minerals from asteroids if three factors combine. The first is a sharp increase in demand, for example as a result of a mass transition to thermonuclear energy, which requires niobium-based superconductors. The second is the accelerated depletion of easily accessible deposits. The third is a breakthrough in space technology, thanks to which the delivery of resources from asteroids will become 10-100 times cheaper.
At the same time, rare metals on Earth will not run out in the next 20–30 years, Ustinov is confident: “Asteroid mining, purely hypothetically, under the most pessimistic forecasts, could become a forced measure only after 2050.”
Mining in Space: Pros and Cons
Experts see both advantages and significant disadvantages in developing asteroid deposits. The advantages often include almost unlimited reserves. In particular, metallic asteroids contain platinum group metal concentrations 10–100 times higher than in terrestrial ores. And the Psyche asteroid, according to preliminary estimates, contains metal reserves worth $100,000 quadrillion.
The second advantage is the ability to obtain rare metals, such as osmium and rhodium, which are almost never found on Earth. In Russia, they are obtained at Norilsk Nickel plants as a by-product of Norilsk platinum processing. There are no other sources of osmium and rhodium in the country.
The advantages also include the environmental component. “There are no environmental restrictions for the extraction of minerals on asteroids, since there is no biosphere on space bodies, so any extraction methods can be used: explosive, thermochemical,” explained Stepan Ustinov from MGRI.
Researchers believe that the main disadvantage and limitation is the high cost of mining in space.
“The cost of sending 1 kg of cargo into space is $2-10 thousand. For comparison, mining 1 kg of platinum in Russia costs an average of $20-30 thousand, but it can be sold for $30-40 thousand. Asteroid mining will require billions of dollars in preliminary investments,” the Ordzhonikidze Russian State Geological Prospecting University calculated.
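Taking the per-kilogram figures in the quote above at face value, a rough comparison looks like this. It is a deliberately naive sketch: it ignores equipment mass, return transport, extraction costs, and the billions in upfront investment the university mentions.

```python
# Per-kg figures quoted above, in USD (low, high); a deliberately naive sketch.
launch_cost_per_kg = (2_000, 10_000)     # sending 1 kg of cargo into space
platinum_mining_cost = (20_000, 30_000)  # mining 1 kg of platinum on Earth
platinum_price = (30_000, 40_000)        # sale price of 1 kg of platinum

# Every kg of equipment sent up and every kg of metal returned carries
# launch-scale costs, so the margin per kg of space-mined platinum is
# bounded by the spread between its price and those transport costs.
best_case_margin = platinum_price[1] - launch_cost_per_kg[0]    # 38000
worst_case_margin = platinum_price[0] - launch_cost_per_kg[1]   # 20000
print(best_case_margin, worst_case_margin)
```

Even in this optimistic framing the per-kg margin exists only if extraction and return together cost less than a few tens of thousands of dollars per kilogram, which current technology does not come close to.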
Scientists call the lack of autonomous mining technologies a major obstacle to asteroid development. “On Earth, mining, crushing, and enrichment are proven processes. In space, there is no gravity for classical flotation, no atmosphere for smelting, and no fully robotic mines,” Stepan Ustinov explained.
How much money is needed for orbital mining
The key issue in the development of the space mining market remains the return on investment. Futurologists are confident that once the field has matured, minerals from space will become a gold mine for investors. But at the initial stages, this will require huge investments with a long payback period.
Evgeny Kuznetsov cited data according to which the extraction of “water as fuel” on asteroids could yield an internal rate of return (IRR) of 12–18%. “The profit will grow as the demand for space fuel and the scale of production grow. By 2050, this market segment could grow to $0.1–1 trillion, and its control will become key for the further development of humanity’s space expansion,” Kuznetsov told RBK Trends.
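For readers unfamiliar with the metric: the internal rate of return is the discount rate at which a project's net present value is zero. The sketch below solves for it by bisection on a purely hypothetical mission cash flow; the figures are invented for illustration and are not Kuznetsov's data.

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return: the rate where NPV crosses zero (bisection)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid     # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical mission: $1B upfront, then ten years of $250M annual returns
flows = [-1000] + [250] * 10   # in $ millions
print(f"IRR ~ {irr(flows):.1%}")  # ~21% for this invented profile
```

A quoted IRR of 12–18% thus means the projected cash flows would break even against a discount rate in that range; whether that clears an investor's hurdle rate depends on how risky the mission is judged to be.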
Vladimir Komlev, Doctor of Technical Sciences, professor of the Russian Academy of Sciences, and director of the Baikov Institute of Metallurgy and Materials Science, holds a more conservative view, in particular because it is practically impossible today to estimate how much more expensive or cheaper mining on asteroids will be compared to developing deposits on Earth.
“It is not possible to conduct a comparative analysis of the costs of extracting solid minerals on Earth and in space. Extraction includes many factors, including scientific, technical, technological and logistical ones. It is more likely that extracting minerals on asteroids is an extremely expensive pleasure and is not feasible at the present time,” the scientist believes.
Is it possible to set up mining in space?
Experts do not have a clear opinion on the prospects for asteroid mining: both positive and extremely negative forecasts are common. At the same time, it is impossible to deny the fact that space companies are striving to develop space resources. For these purposes, they organize expensive missions and conduct experiments on Earth.
According to the futurologist, humanity should be thinking about developing deposits in space already: “When possible oil and other deposits in the Arctic were being discussed about 100 years ago, people also wondered why they were needed, since there were warm Baku and convenient Persia. Over time, however, it turned out that those resources were not enough for humanity,” Kuznetsov recalled. “As at the start of Columbus’s voyage, it is difficult to calculate specific revenue, but in the end we know how far humanity has advanced thanks to ocean voyages. Space investors now have roughly the same thing in mind.”
Oleg Mansurov also holds a positive view. Although there is not a single example of a paid-off investment in asteroid mining to date, the CEO of SR Space calls the market a “blue ocean” in which pioneers will be able to earn outsized profits.
“This may happen as early as the 2030s, but for that we need to invest now in reusable rockets and nuclear reactors for use in space. Our country has groundwork in these areas, and becoming a leader in supplying the world market with resources mined in space is a highly prestigious and significant geopolitical role,” Mansurov is confident.
Stepan Ustinov calls asteroid mining technically possible but not yet economically feasible: “If it happens, it will not be soon, and the first resources to be extracted will be water and platinum group metals, since they are needed for space missions. Unfortunately, Russia is lagging behind in this race: there are no private startups or state programs for asteroid mining. There is no doubt that the future belongs to those who are the first to make space mining profitable. For now, the leaders are the United States and China, but if breakthrough technologies appear, everything could change dramatically. The main conditions for this will be a reduction in launch costs, automation of mining, and demand for raw materials in space for the construction of permanent extraterrestrial bases.”
Mining and metallurgical companies on Earth can also change the terms of the space race, not only by investing in space development but also through their existing expertise. If Stepan Ustinov’s forecast comes true and platinum group metals really do begin to be mined on asteroids, Norilsk Nickel’s technologies and competencies could prove critically important to Russia’s emergence as a leader in asteroid mining, since the company is the world’s largest producer of palladium.
In 2024, Norilsk Nickel produced 2.8 million ounces of palladium, 3% more than in 2023, as well as 667 thousand ounces of platinum (+0.5% year-on-year). Notably, actual volumes exceeded the forecast for 2024.
Vladimir Komlev, for his part, is counting on the success of mining on Earth: “It can be assumed with a high degree of probability that mineral extraction on asteroids will be implemented in the distant future. Why not? At present, however, the question belongs to science fiction, given the large number of scientific and technical problems, including logistical ones, that still require solutions.”