Author: zulqarnainbhullar

  • The growing impact of ransomware: Important security news for October 2025

    Table of contents

    • 01. Major malware and unauthorized access incidents
    • 02. Other government and industry trends
    • 03. Lastly

    In October 2025, various security incidents were reported, including unauthorized access due to misconfiguration of cloud environments and the theft of information by former employees. Among these, ransomware attacks caused a series of business interruptions, severely impacting corporate activities. This article summarizes the major incidents reported in October, as well as government and industry trends, and provides information useful for future security measures.

    Major malware and unauthorized access incidents

    ■ Ransomware attacks

    [Update] Major Beverage Manufacturing Group: System Failure and Possible Personal Information Leak due to Ransomware Attack

    On September 29, 2025, a major beverage manufacturing group was hit by a ransomware attack, causing a system outage. This disrupted various operations, including order acceptance and shipping, at domestic group companies. The attack affected only systems within Japan; no damage to overseas systems has been confirmed. Furthermore, investigations have confirmed that personal information may have been illegally transferred, and measures are being taken against those in possession of the relevant information. [1]

    Retail: Ransomware attack halts online orders

    In October 2025, the website of a company selling office supplies was hit by a ransomware attack, completely halting order acceptance and shipping. Other services, such as new member registration and email delivery, were also halted. Furthermore, an investigation into whether any information was leaked is still ongoing. [2]

    Retail industry: Possibility of credit card information being leaked due to unauthorized access

    In August 2024, unauthorized access by a third party was discovered on a retailer’s e-commerce website. As a result, it was confirmed that the credit card information of 12,630 customers who used the e-commerce website between March 2021 and August 2024 may have been leaked. The potentially leaked information included cardholder names, card numbers, expiration dates, security codes, email addresses, passwords, and phone numbers. The company explained that it delayed the announcement because it felt announcing the information at an uncertain stage could cause confusion. It then stated that it decided to make the announcement after confirming the results of the investigation and coordinating with the card companies. The old website has now been closed, and a new website with enhanced security was launched in November 2024. The company is currently working to implement PCI DSS-compliant operations and strengthen monitoring to prevent recurrence. [3]

    ■ Cloud environment incidents

    Delivery service industry: Possibility of personal information leakage due to disclosure of cloud server access keys

    It was discovered that the access keys for the cloud server of a company operating a subscription delivery service were publicly available for approximately one year and seven months, from January 2024 to August 2025. As a result, third parties were reportedly able to access the server. The information potentially leaked included 20,776 items in total, including membership information, address information, and order information such as delivery addresses for some individual and corporate customers, as well as names of contact persons, delivery addresses, and contact information for corporate business partners. The company has disabled the access keys, updated authentication information, and revised its auditing functions. [4]

    Information services industry: Possibility of personal information leakage due to incorrect access permission settings in cloud environments

    On September 2, 2025, an information services company discovered a mistake in the access permissions settings in the cloud environment used to manage employees’ PCs and mobile phones (company-issued devices), allowing third parties to access the information. Potentially leaked information included the names, employee numbers, job titles, departments, and email addresses of employees, temporary workers, and contractors. At this time, no evidence of unauthorized access by third parties has been found, and the company corrected the access permissions on the same day the problem was discovered, completing the response. [5]

    Information services industry: Unauthorized access at an AI-OCR tool provider

    On September 25, 2025, it was discovered that an AI-OCR service provided by an information service provider had been illegally accessed by a third party. Subsequent investigations revealed a possible leak of personal information on October 15. [6]

    ■ Other incidents

    Real estate management industry: Possibility of personal information leak due to unauthorized access

    On July 28, 2025, a real estate management company received a report from an investigative agency that files believed to contain personal information were being sold on a dark web site. The investigation into the incident confirmed unauthorized access to the server and information leaks. It was discovered that the attacker had illegally obtained a file containing the administrator password and used that password to gain unauthorized access. The leaked information is said to include approximately 1,900 pieces of data entered into an inquiry form and approximately 4,200 contract list files. [7]

    Logistics service industry: Former employee illegally takes out business partner information

    On October 14, 2025, it was discovered that a former employee of a branch office of a major logistics company had illegally taken information about local business partners and leaked it to two companies. The information taken amounted to approximately 26,790 items, including company names, addresses, and invoice amounts. Of these, 750 included personal names, and 324 included the names of current and former employees. To date, only one case of unauthorized use has been confirmed. [8]

    Logistics services industry: Unauthorized logins due to password list attacks

    On October 15, 2025, it was discovered that a third party had illegally logged into “My Page,” a membership service operated by a major logistics company, using IDs and passwords obtained elsewhere (a password list attack, also known as credential stuffing). There has been no confirmed leakage of credit card or bank account information at this time. The company has identified and blocked the IP addresses from which the unauthorized access occurred and notified affected members individually. The company is also strengthening measures to prevent recurrence, such as encouraging members to change their passwords and enable two-factor authentication. [9]

    Other government and industry trends

    IPA Announces Registration Status of JVN iPedia, a Vulnerability Countermeasure Information Database, for the Third Quarter of 2025

    The Information-Technology Promotion Agency (IPA) has released the vulnerability countermeasure information registration status for the third quarter of 2025 (July 1 to September 30, 2025) in its vulnerability countermeasure information database, JVN iPedia. The number of vulnerabilities registered in the Japanese version of JVN iPedia during the quarter was 10,869, bringing the cumulative total number of vulnerabilities registered since JVN iPedia’s launch on April 25, 2007, to 253,767. [10]

     

  • Kingston FURY Beast 16GB DDR5 5200MHz CL40, switch to DDR5!

    Kingston designed the Fury Beast line for gamers and enthusiasts. It offers data rates ranging from DDR5-4800 to DDR5-6000, with capacities of 16GB and 32GB. The model presented here is one of the new Fury Beast DDR5 memory modules. It features a black printed circuit board with a matching aluminum heat spreader. While it differs in several ways from the DDR4 version, the overall look remains quite similar; the redesigned heat spreader is the main visual change in this iteration. Here is what you need to know about these RAM modules with their aggressive and attractive design.

    Kingston FURY Beast 16GB DDR5 5200MHz CL40, compact design and high compatibility

    The Kingston FURY Beast 16GB DDR5 5200MHz CL40 features a compact design, making it compatible with large CPU air coolers. Furthermore, the component has a low-profile heat spreader, meaning it doesn’t add much height to the PCB. In fact, the Fury Beast is only 34.9mm tall. Each memory module is 16GB with a single-rank design. Kingston uses Micron’s MT60B2G8HB-48B (D8BNJ) integrated circuits.

    Performance and XMP 3.0 profiles

    The Kingston Fury Beast DDR5-5200 C40 operates at a base frequency of DDR5-4800 with timings of 40-39-39-76. It features two XMP 3.0 profiles. The first allows it to reach a DDR5-5200 frequency with timings of 40-40-40-80 and a voltage of 1.25V. The second offers a DDR5-4800 configuration with timings of 38-38-38-70 and a voltage of 1.1V. While these performance levels aren’t groundbreaking, this memory kit offers an interesting option for those looking to push the limits of DDR5-4800.
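As a rough way to compare these profiles, the absolute first-word latency can be derived from the CAS latency and the data rate (the DDR I/O clock runs at half the data rate, so one cycle lasts 2000 / data-rate nanoseconds). A minimal Python sketch, using the figures quoted above:

```python
def true_latency_ns(data_rate_mt_s: int, cas_latency: int) -> float:
    # One I/O clock cycle = 2000 / data_rate ns (the clock is half the
    # DDR data rate); multiply by the CAS latency in cycles.
    return cas_latency * 2000 / data_rate_mt_s

print(round(true_latency_ns(5200, 40), 2))  # XMP profile 1: DDR5-5200 CL40 -> 15.38 ns
print(round(true_latency_ns(4800, 38), 2))  # XMP profile 2: DDR5-4800 CL38 -> 15.83 ns
print(round(true_latency_ns(4800, 40), 2))  # JEDEC base:    DDR5-4800 CL40 -> 16.67 ns
```

Note that the two XMP profiles end up within half a nanosecond of each other: the higher data rate of the DDR5-5200 profile roughly offsets its looser timings.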

    Kingston FURY Beast 16GB DDR5 5200MHz CL40: Discreet design and seamless integration

    The Fury Beast’s discreet and elegant design integrates perfectly into high-end gaming systems or workstations, even if some users find it lacking in visual flair. In terms of price, DDR5 remains more expensive than DDR4, although costs have decreased. The Kingston Fury Beast DDR5-5200 C40 also benefits from Kingston’s reputation for quality and reliability, which works in the kit’s favor. Among other things, the memory kit offers manual overclocking headroom. It is therefore a solid choice for enthusiasts looking to optimize performance without sacrificing stability.

    Verdict


    The Kingston Fury Beast DDR5-5200 C40 stands out as a solid option for gamers and enthusiasts looking to upgrade to DDR5 without breaking the bank. With its two XMP 3.0 profiles, it offers frequencies up to DDR5-5200 and manual overclocking headroom. Its discreet design and low-profile heat spreader make it compatible with most systems. In short, it’s a reliable and high-performing RAM kit, ideal for those who want to optimize their system without sacrificing stability.

    What we like
    – Low-profile heat sink
    – Average performance
    What we like less
    – Limited overclocking headroom
    – No RGB (for some)
  • MSI MAG CORELIQUID E360, a silent AIO for distraction-free gaming

    The MSI MAG CORELIQUID E360 is a liquid cooling system that delivers excellent thermal performance while maintaining a low noise level. Even without power limits, it manages to keep the maximum temperature of the latest Intel and AMD processors below their limits under intensive workloads. Furthermore, this AIO stands out for its ability to remain silent while ensuring efficient cooling. It could be a valuable asset for users seeking optimal PC performance, especially for high-end components.

    MSI MAG CORELIQUID E360, chic design and easy installation

    The MSI MAG CORELIQUID E360 stands out with its sleek design and robust construction. The cooler measures 394 x 120 x 27 mm, providing ample surface area for heat dissipation. Three 120 mm PWM fans accompany the cooler, ensuring efficient airflow and maintaining a low noise level. Installation is easy thanks to a versatile mounting system compatible with most Intel and AMD sockets. Finally, the rotating top cap, featuring the MSI dragon logo, adds a touch of personalization, allowing users to adjust the logo’s orientation to their preference.

    MSI MAG CORELIQUID E360, a closed-loop system for guaranteed efficiency and silence

    The MSI CORELIQUID E360 features a closed-loop cooling system, the primary driver of its performance. This system is capable of efficiently cooling even the most demanding CPUs. The pump is located within the radiator to minimize noise, ensuring quiet and efficient operation. Furthermore, the manufacturer guarantees silent operation with this AIO cooler, which comes equipped with dedicated software. This software allows users to adjust settings and monitor component temperatures. Additionally, the fans and the top of the block feature customizable ARGB lighting, enabling synchronization with other system components for a harmonious aesthetic.

    MSI MAG CORELIQUID E360, low temperatures and minimized noise level

    The MSI MAG CORELIQUID E360 is a truly powerful cooling system. It proves capable of dissipating heat remarkably well. Furthermore, this model keeps CPU temperatures low even under heavy loads. This AIO manages to maintain an optimal balance between cooling performance and noise level. In fact, the fans operate almost inaudibly during standard use and only become audible during intensive operations. This cooling system is perfectly suited to gamers and content creators who demand high performance coupled with a quiet working environment.

    Verdict


    The MSI MAG CORELIQUID E360 is an efficient and aesthetically pleasing all-in-one liquid cooling solution. It was specifically designed to meet the needs of users who demand both performance and style. Its easy installation comes with a more than satisfactory ability to maintain low CPU temperatures. Its design includes a radiator with an integrated silent pump and customizable ARGB lighting. Despite its price, it’s a wise investment for those seeking a combination of performance, quiet operation, and style.

    What we like
    – Fantastic build quality
    – Satisfactory performance
    – Runs more quietly than other AIOs in its class
    What we like less
    – High investment
  • What is hardening? Explaining security methods and benefits

    Table of contents

    • 01. What is hardening?
    • 02. Typical hardening methods
    • 03. The benefits of hardening
    • 04. Points to note when performing hardening
    • 05. Summary

    Hardening is the process of increasing the security of an operating system, application, software, etc.

    Typical hardening techniques include applying security patches and reviewing initial settings.

    By implementing hardening, you can not only strengthen security, but also reduce the risk of system downtime and improve performance.

    In this article, we will give an overview of hardening and explain the benefits of implementing it, along with points to be careful about.

    ▼What you will learn from this article

    • Hardening Overview
    • Hardening Techniques
    • The benefits of hardening
    • Precautions when performing hardening

    If you want to know the basics of hardening, please read this.

    This article also introduces security solutions from LANSCOPE Professional Services, which are effective in providing hardening support.

    If you are a company or organization looking to strengthen your security, please check it out.

    What is hardening?


    Hardening is the process of increasing the security of information systems, including operating systems, applications, and software.

    Specific hardening methods include the following:

    • Disable unused services and software
    • Applying security patches
    • Strengthened access control
    • Reviewing the initial settings

    By implementing hardening, you can not only strengthen security, but also reduce the risk of system downtime and improve performance.

    The purpose of hardening

    Hardening is carried out with the aim of eliminating vulnerabilities in the system as much as possible and reducing security risks.

    To prevent cyber attacks, it is not enough to simply install high-precision security tools; basic measures such as promptly applying security patches and regularly reviewing settings are also essential.

    By thoroughly implementing basic countermeasures, you can reduce the gaps that attackers can exploit and decrease the likelihood of being attacked.

    Typical hardening methods


    Here are five common hardening techniques:

    • Applying security patches
    • Access Control
    • Log monitoring
    • Disable unnecessary services and software
    • Changing the default settings

    Each is explained in detail below.

    Applying security patches

    Various vulnerabilities are discovered in software and operating systems every day.

    If these vulnerabilities are left unfixed, they could lead to serious damage such as unauthorized access or malware infection.

    To resolve these vulnerabilities, vendors distribute fixes called “security patches.”

    By applying a security patch, the programs and configuration files containing the vulnerabilities are rewritten, and the vulnerabilities are fixed.

    By promptly applying security patches as they are released, you can reduce the risk of vulnerabilities being exploited.
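The patch decision itself often comes down to a version comparison, which must be numeric rather than lexical (string comparison would rank "2.4.9" above "2.4.10"). A minimal sketch with hypothetical version strings:

```python
def parse_version(v: str) -> tuple:
    # Turn "2.4.9" into (2, 4, 9) so components compare numerically.
    return tuple(int(part) for part in v.split("."))

def needs_patch(installed: str, patched_in: str) -> bool:
    # True if the installed version predates the release that fixes the flaw.
    return parse_version(installed) < parse_version(patched_in)

print(needs_patch("2.4.9", "2.4.10"))   # True (a lexical compare would get this wrong)
print(needs_patch("2.4.10", "2.4.10"))  # False: already at the fixed version
```

This is only the comparison step; real patch management also tracks advisories, tests patches, and schedules deployment.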

    Access Control

    To prevent unauthorized access, it is effective to design systems based on the principle of least privilege, which grants only the permissions necessary for business operations.

    By limiting permissions, you can prevent unauthorized viewing or removal of information from both inside and outside the company.

    It is also effective to introduce “multi-factor authentication,” which combines two or more authentication factors, such as entering a standard password followed by an authentication code.

    By implementing multi-factor authentication, even if a password is leaked, only authorized users can log in unless they obtain an additional authentication factor, which is expected to prevent unauthorized access.
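The principle of least privilege described above can be sketched as a simple allowlist check, where each role is granted only the permissions its duties require. The role names and permissions below are purely illustrative:

```python
# Hypothetical role-to-permission mapping: each role gets only what it needs.
ROLE_PERMISSIONS = {
    "sales": {"read_customer"},
    "admin": {"read_customer", "write_customer", "export_data"},
}

def is_allowed(role: str, action: str) -> bool:
    # Deny by default: unknown roles and unlisted actions are refused.
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("sales", "export_data"))  # False: not needed for the job
print(is_allowed("admin", "export_data"))  # True
```

The key design choice is the deny-by-default stance: anything not explicitly granted is refused, which limits what a compromised or malicious account can do.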

    Log monitoring

    The various logs output by systems and applications contain clues to signs of unauthorized access and abnormal behavior.
    Therefore, by continuously monitoring and analyzing these logs, you will be able to detect and respond to problems early.

    For example, being able to detect unusual behavior, such as suspicious communications outside of business hours or a large number of login attempts, will help minimize the damage caused by cyber attacks.

    It will also be an essential source of information for investigating the cause and considering measures to prevent recurrence in the unlikely event of an incident.
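As a minimal illustration of the log-monitoring idea, the following sketch counts failed logins per source address and flags bursts above a threshold. The log representation and the threshold are assumptions for illustration, not a prescribed tool or format:

```python
from collections import Counter

def suspicious_sources(events, threshold=5):
    # events: (source_ip, success_flag) pairs from an authentication log.
    # Count only failures, then report sources at or above the threshold.
    fails = Counter(ip for ip, ok in events if not ok)
    return sorted(ip for ip, n in fails.items() if n >= threshold)

events = [("203.0.113.9", False)] * 6 + [("198.51.100.2", False), ("198.51.100.2", True)]
print(suspicious_sources(events))  # ['203.0.113.9']
```

A real deployment would read structured logs continuously and alert, but the core of detecting a password list attack is exactly this kind of per-source failure count.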

    Disable unnecessary services and software

    Leaving unused services and software enabled can create a gateway for cyber attacks.
    For example, if you don’t realize that support for unused software has ended, an unpatched vulnerability could be exploited and your system could be compromised.

    If there are any services or software that have no clear purpose and are left unused, it is recommended that you disable or delete them.
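One way to find such candidates is to compare the services actually running against an approved allowlist; anything not on the list is a candidate to disable or remove. The service names below are hypothetical:

```python
def unapproved_services(running: set, approved: set) -> list:
    # Set difference: whatever runs but was never approved.
    return sorted(running - approved)

running = {"sshd", "httpd", "telnetd", "old-backup-agent"}
approved = {"sshd", "httpd"}
print(unapproved_services(running, approved))  # ['old-backup-agent', 'telnetd']
```

In practice the running set would come from the OS service manager; the review step, deciding whether each flagged service truly has no purpose, remains a human judgment.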

    Changing the default settings

    The “default settings (initial settings)” of products and services often have a low level of security.

    In some cases, the default password may even be publicly documented online.

    Therefore, when initially installing the system, it is necessary to review the settings, change passwords, optimize communication settings, etc.

    Continuing to use the default settings can significantly reduce the security of the entire system.
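A small configuration audit can catch the most common default-setting problems before a system goes live. The checks and the list of known default passwords below are illustrative only; real defaults for many products are publicly documented:

```python
# Hypothetical list of factory-default passwords that must never survive setup.
KNOWN_DEFAULTS = {"admin", "password", "12345"}

def default_setting_findings(config: dict) -> list:
    # Return a human-readable list of default-setting problems found.
    findings = []
    if config.get("password") in KNOWN_DEFAULTS:
        findings.append("default password still in use")
    if config.get("remote_admin", False):
        findings.append("remote administration left enabled")
    return findings

print(default_setting_findings({"password": "admin", "remote_admin": True}))
# ['default password still in use', 'remote administration left enabled']
```

Running such checks as part of initial installation makes the "review the default settings" step repeatable instead of relying on memory.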

    The benefits of hardening


    By implementing hardening, you can expect the following benefits:

    • System security enhancements
    • Performance improvements
    • Compliance

    By implementing hardening, you can reduce the risk of attacks such as unauthorized access to your system and data tampering.

    In addition, by continuously monitoring logs and establishing a system that allows for quick investigation of the cause in the unlikely event of any unauthorized activity, damage can be minimized.

    Additionally, disabling unnecessary services and software can help reduce resource waste and improve system performance.

    In addition, by regularly reviewing system settings and maintaining records, you can demonstrate high reliability during audits and external evaluations.

    Implementing hardening is not only effective in strengthening internal security, but also provides peace of mind to customers and business partners and increases trust in the company.

    Points to note when performing hardening


    Hardening is an effective way to increase the security level of a system, but there are some things to be aware of.

    Here, we will explain three points to keep in mind when implementing hardening.

    • Specialized knowledge is required
    • Continuous management is required
    • There is a risk of reduced business efficiency

    Let’s take a closer look.

    Specialized knowledge is required

    To implement effective hardening, a high level of expertise in the workings of operating systems, networks, applications, and various other software is required.

    If you make incorrect configuration changes to your OS or network without sufficient knowledge, you may end up increasing the risk of attack, even though you are trying to strengthen security.

    There is also the risk that incorrect configurations could cause business operations to stop or important functions to become unavailable.
    Therefore, when implementing hardening, not only is basic security knowledge required, but the ability to understand system configurations and dependencies, as well as the ability to predict the impact of configuration changes, are also required.

    Ideally, you should work with a professional security engineer if possible.

    Continuous management is required

    Hardening is not something that can be done once and then finished.
    It requires continuous review and response in response to software upgrades, changes in system configuration, and updates to vulnerability information.

    In particular, applying security patches and monitoring logs need to become part of daily operations as a habit.

    It is also important to regularly review the scope of impact caused by configuration changes. This should be done continuously so that problems such as business interruptions caused by hardening are not overlooked for long periods of time.

    Furthermore, in order to respond to new security threats, it is necessary to constantly update knowledge and technology.

    It is important to view hardening as part of an ongoing operational process, rather than a one-off measure.

    There is a risk of reduced business efficiency

    While implementing hardening increases security, it can also reduce business flexibility and efficiency.
    For example, prioritizing security so heavily that a system hampers day-to-day work, or disabling services and software that are actually used in business, can cause confusion if done without a plan.

    Therefore, when implementing hardening, you must always be aware of the impact on business operations and consider the balance with convenience.

    It is essential to take into consideration many aspects, such as conducting sufficient tests before and after changing settings and coordinating with relevant departments.

    We also offer a “Vulnerability Assessment Package” and a “Security Health Assessment Package” for companies and organizations that “don’t know where to start” or are undergoing an assessment for the first time.

    For more information about the “Vulnerability Assessment Package” and “Security Health Assessment Package,” please refer to the following pages.

    Penetration Testing

    A penetration test determines the extent of damage that could occur in the event of unauthorized access and whether appropriate measures can be taken after a compromise.

    By launching simulated attacks modeled on actual cyber attacks, we can identify the current security level and any issues.

    LANSCOPE Professional Services’ penetration testing is characterized by high-quality testing by highly specialized and experienced engineers, as well as careful listening and support.

    For more information about penetration testing, please see the following page:

    Summary

    In this article, we discussed the topic of “hardening,” explaining its techniques, benefits, and points to be aware of.

    ▼Summary of this article

    • Hardening is the process of strengthening the security of information systems, including operating systems, applications, and software.
    • Typical methods include “applying security patches,” “access control,” “log monitoring,” “disabling unnecessary services and software,” and “changing initial settings.”
    • By implementing hardening, you can not only strengthen security, but also expect improvements in performance and compliance.
    • Effective hardening requires deep software and network expertise.

    Hardening is not a one-time security measure, but is effective when implemented continuously.

    To reduce the risk of becoming a target of cyber attacks, we should take proactive steps to strengthen security before an incident occurs.

    Additionally, the security solutions of LANSCOPE Professional Services are effective for hardening support.

    Make sure you correctly understand whether the software and applications you have implemented have any vulnerabilities, what your company’s security level is, and aim to implement appropriate measures.

    For companies and organizations that have issues such as “we don’t know where to start” or “this is our first assessment,” please make use of our materials with flowcharts that help you choose the most appropriate assessment, as well as our “Vulnerability Assessment Package” and “Security Health Assessment Package.”

  • What is offensive security? A simple explanation

    Table of contents

    • 01. What is offensive security?
    • 02. The Importance of Offensive Security
    • 03. Representative offensive security techniques
    • 04. The Benefits of Offensive Security
    • 05. Disadvantages of Offensive Security
    • 06. Summary

    Offensive security is a method and approach for evaluating security from an attacker’s perspective and identifying vulnerabilities.

    As cyber-attack methods have become more sophisticated and ingenious in recent years, traditional passive methods alone are no longer sufficient to provide security, and attention is now being focused on offensive security, which is an active defense.

    This article provides a clear explanation of offensive security, including its overview, advantages, and disadvantages.

    ▼What you will learn from this article

    • What does offensive security mean?
    • Representative offensive security techniques
    • Advantages and disadvantages of offensive security

    We will also introduce the security solutions of “LANSCOPE Professional Services,” which are effective for offensive security.

    If you are a company or organization considering introducing offensive security to strengthen your security, please check it out.

    What is offensive security?

    Offensive security is a method of strengthening security by taking an attacker’s perspective, identifying vulnerabilities in a company’s systems and networks, and then correcting those vulnerabilities.

    Penetration testing is a typical offensive security technique.

    Penetration testing is a method of evaluating the security of IT infrastructure such as systems, networks, and applications from an attacker’s perspective.

    By implementing offensive security, organizations can objectively understand their own security level.

    By understanding the real risks through verification based on actual attack scenarios, rather than armchair theories, it will be possible to take effective measures.

    The Importance of Offensive Security

    Offensive security is attracting attention for two reasons:

    • Defensive security measures alone are not enough to deal with the recent cyber attacks.
    • Conventional security measures place a heavy burden on security personnel.

    Cyber-attack methods are becoming more sophisticated and ingenious every year.

    Attackers will continue to use various methods to achieve their goals, such as using unknown malware to evade traditional security measures.

    As a result, traditional security measures that focus on defense (defensive security) alone are no longer sufficient.

    Additionally, defensive security is a reactive approach that responds after an attack has occurred, which tends to place a heavy burden on security departments.

    Constantly monitoring and responding to attacks makes it difficult to find time to make fundamental improvements.

    “Offensive security” addresses the issues that arise when only defensive security is implemented.

    Offensive security aims to strengthen security systems by identifying serious or easily exploitable vulnerabilities before attackers find them, and taking measures to address them.

    Implementing offensive security will reduce the frequency of attacks, which will ultimately reduce the burden on the security department.

    Representative offensive security techniques

    Some of the most common offensive security techniques include:

    • Penetration Testing
    • Red Team Exercises
    • Vulnerability Assessment

    Let’s take a closer look at each method.

    Penetration Testing

    Penetration testing is a method of evaluating current security measures by launching attacks on systems and networks from an attacker’s perspective.

    Security engineers mimic attackers and attempt to infiltrate systems by exploiting vulnerabilities.

    Penetration testing can identify the severity of vulnerabilities and the risks that may arise if an attack is successful, leading to stronger countermeasures.

    LANSCOPE Professional Services provides highly accurate penetration testing that creates realistic cyber-attack scenarios and identifies weaknesses and risks in an organization’s systems.

    Details of the service will be described later.

    Red Team Exercises

    Red team exercises are a security technique in which participants are divided into a “red team” that launches simulated attacks on systems based on assumed attackers, and a “blue team” that defends against those attacks, to verify the effectiveness of current security measures and systems.

    The penetration testing mentioned above evaluates only system-level vulnerabilities, so its attacks are limited to the systems themselves.

    On the other hand, red team exercises evaluate not only systems but also the security system of the entire organization, so they can approach not only networks but also offices and physical facilities.

    Additionally, financial institutions and other organizations are increasingly required to conduct advanced TLPT (Threat-Led Penetration Testing).

    Vulnerability Assessment

    Vulnerability assessment is the process of assessing the presence, type, and threat level of known vulnerabilities lurking in networks and web applications.

    By doing this regularly, you can identify and fix vulnerabilities before they are exploited by attackers.

    There are two types of vulnerability assessment: manual assessment and tool assessment, so it is important to use them appropriately depending on your purpose and internal resources.
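At its simplest, a tool-based assessment matches a software inventory against a feed of known-vulnerable versions. The entries below are made up for illustration; real scans draw on databases such as JVN iPedia or the NVD:

```python
# Hypothetical known-vulnerable (component, version) pairs from a feed.
KNOWN_VULNERABLE = {("examplelib", "1.2.0"), ("webfw", "3.1.4")}

def assess(inventory):
    # Report every installed component whose exact version is known-vulnerable.
    return sorted(f"{name} {ver}" for name, ver in inventory
                  if (name, ver) in KNOWN_VULNERABLE)

inventory = [("examplelib", "1.2.0"), ("webfw", "3.2.0")]
print(assess(inventory))  # ['examplelib 1.2.0']
```

Exact-version matching is the simplest possible rule; production scanners also handle version ranges and configuration-dependent exposure, which is one reason manual assessment remains valuable alongside tools.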

    The Benefits of Offensive Security

    Implementing offensive security can be expected to provide the following benefits:

    • Early detection of vulnerabilities
    • Strengthening security systems

    The biggest advantage of offensive security is that it can detect vulnerabilities early.

    By identifying vulnerabilities before they are exploited by attackers and taking measures to address them, it is possible to reduce the probability of attacks occurring and minimize the damage caused.

    Furthermore, by conducting tests and exercises that simulate actual attacks, such as penetration tests and red team exercises, you can identify more effective defensive measures and strengthen your security system.

Additionally, as a secondary benefit, simulated attacks can be expected to raise employees’ security awareness.

    Disadvantages of Offensive Security

    When implementing offensive security, you need to be aware of the following challenges:

• Costs and personnel must be secured
• Risk of system failures during testing

    The costs and resources required to implement and operate offensive security can be a significant burden for many companies.

    For example, continuous investment of resources is required, such as securing engineers with specialized knowledge, requesting assistance from external vendors, and making improvements based on the diagnostic results.

    Small and medium-sized enterprises in particular will need to proceed with implementation carefully, assessing the cost-effectiveness.

    Furthermore, even though it is a simulated attack, since the test is conducted under conditions similar to a production environment, there is a risk of unexpected system failures or service outages.

    If any trouble does occur, it will inevitably have an impact on business operations, so it is important to make advance arrangements and have a backup system in place.

    “Vulnerability Assessment” to respond to the latest threats

    LANSCOPE Professional Services’ vulnerability assessment service regularly collects and analyzes incident and vulnerability information, and reflects this information in the assessment rules as appropriate, ensuring that services are always based on the latest security standards.

We are known for our high-quality assessments, boasting a 90% repeat rate among clients that include private companies, development companies, and government agencies.

    In addition, our nationally certified and experienced specialists will comprehensively identify vulnerabilities and security risks hidden in your environment and support you in implementing efficient vulnerability countermeasures.

    The report after the assessment will not only tell you the type and risk level of each threat, but also the appropriate countermeasures, so you can steadily fix vulnerabilities and improve your security level.

    ▼Vulnerability diagnosis service list

• Web Application Diagnostics
• Source Code Diagnostics
• Network Diagnostics
• Smartphone Application Diagnostics
• Game Security Assessment
• IoT Vulnerability Diagnostics
• Penetration Testing
• Cyber Risk Health Check

    If you are a company or organization that is concerned about vulnerabilities in your environment or services, or would like a professional assessment rather than a diagnostic tool, please contact LANSCOPE Professional Services.

    We will introduce the most suitable plan based on your convenience and budget.

    We also offer a “Security Health Check Package” for customers who want to “first perform a simple vulnerability check.” This allows for a low-cost, short-term check.

    Highly accurate penetration testing by experienced assessors

    LANSCOPE Professional Services creates realistic cyber-attack scenarios and provides highly accurate penetration testing to identify weaknesses and risks in an organization’s systems.

    ▼Penetration test report image

The “Penetration Test” offered by LANSCOPE Professional Services simulates advanced cyber attacks that have drawn much attention in recent years, such as ransomware and APT attacks, and tests whether the attack objectives can be achieved under the given attack scenario.

It also clarifies the extent of damage that would occur if a real cyber attack took place and whether appropriate measures could be taken after infection, and it can suggest optimal security measures based on the results.

    If you would like to know about your company’s vulnerabilities or unknown attack routes, please consider undergoing a penetration test from LANSCOPE Professional Services.

    Our experienced assessors will create the optimal test scenario (attack sequence) to suit your environment and budget.

Summary

In this article, we discussed the topic of “offensive security,” explaining its overview, advantages, and disadvantages.

    Summary of this article

    • Offensive security is a security method that identifies vulnerabilities in systems and networks from an attacker’s perspective.
    • Typical methods include “penetration testing,” “red team exercises,” and “vulnerability assessment.”
    • While implementing this is expected to lead to “early detection of vulnerabilities” and “strengthening of security systems,” there are concerns that securing resources will be a bottleneck.

    To combat increasingly sophisticated cyber attacks, it is important to use both defensive and offensive security strategies, rather than adopting just one.

    We encourage you to take advantage of the vulnerability assessment and penetration testing services provided by LANSCOPE Professional Services, as introduced in this article, to build a robust security system.

For those who want to first conduct a simple vulnerability assessment, or who don’t know where to start, we also offer the “Security Health Check Package,” which can be conducted at low cost and in a short period of time.

    Please use this to implement effective offensive security.

  • Explaining security measures and services that should be implemented on AWS

    table of contents

    • 01.Since 2020, the number of users of cloud services, including AWS, has been steadily increasing.
    • 02.Unlike on-premise, the cloud has a “shared security responsibility”
    • 03.AWS’s Shared Responsibility Model for Security
    • 04.How AWS handles security
    • 05.The eight main security services provided by AWS
    • 06.If you want to consult a professional about AWS security settings, we recommend “Cloud Diagnostics”
• 07.Summary

AWS security measures are essential for companies and users who use AWS. Comprehensive AWS security measures are required to protect customer data and confidential information stored on AWS and prevent security incidents.

AWS is an abbreviation for Amazon Web Services, the general term for the cloud services provided by Amazon.
Specifically, it includes the virtual server service Amazon Elastic Compute Cloud (EC2), the storage service Amazon S3, the database service Amazon Aurora, and more.

    However, as the use of AWS expands, security incidents such as unauthorized access and information leaks caused by cloud services are also increasing.

This article explains the security services provided by AWS that companies should be aware of, as well as the scope of security responsibility that companies (users) bear. If you are concerned about AWS security, be sure to read this article.

    ▼What you will learn from this article

    • The use of cloud services such as AWS is increasing, and with it, the number of data breaches is also increasing.
    • With cloud services, the “security scope” for which users are responsible is reduced.
    • Effective AWS security measures include “strengthening authentication and access rights through correct IAM configuration,” “detecting suspicious behavior through security monitoring,” and “regular application of security patches.”
    • AWS provides a number of security services to protect users’ information assets from various security threats.

    Since 2020, the number of users of cloud services, including AWS, has been steadily increasing.


    With the spread of remote work and changes in working styles, demand for cloud services is increasing not only in Japan but around the world.

    According to data released by the Ministry of Internal Affairs and Communications in 2023, the size of the global cloud service market is expected to continue to grow steadily.

In addition, in terms of “corporate cloud service usage in Japan,” 44.9% of companies use the cloud company-wide, while 27.3% use it in some offices or departments, meaning that more than 70% of companies use cloud services in some form.

Information leaks from cloud services are also on the rise

As these cloud services become more widespread, there has been an increase in “information leaks” originating in the cloud and cyber attacks targeting weaknesses in cloud services.

    While cloud services have the advantage of being able to be used without having to build an in-house system environment, there are concerns that they tend to be dependent on the security environment of the provider.

    Additionally, each cloud service requires various security settings, such as file access permissions and authentication settings, but many users say they are unsure whether they have configured the settings correctly or have left the settings as they were when the service was first installed.

In fact, there has been no end to cyber attacks and unauthorized access incidents that use cloud services as a springboard, and many of these are caused by insufficient security settings on the user’s side.

    Unlike on-premise, the cloud has a “shared security responsibility”


There is a difference in the “scope of security” for which users are responsible between traditional “on-premises environments” and cloud environments.

With on-premises systems, users build the systems and manage the servers themselves, so they are responsible for the security of the entire system, including physical resources, software, and data.

However, with cloud services, the underlying systems and servers are rented from the vendor, so the company does not own the physical equipment. The user’s security responsibility is therefore limited to the software and data that the company itself builds and owns.

As a result, users of cloud services bear a narrower security responsibility than they would owning hardware in an on-premises environment, and can concentrate their security measures on that narrower scope.

AWS calls this division of security responsibility the “shared responsibility model.”

    AWS’s Shared Responsibility Model for Security

The “Shared Responsibility Model” advocated by AWS is, simply put, a framework for ensuring the security of customers’ AWS environments by clarifying the scope of responsibility borne by each of the following parties:

• AWS (vendor side)
• Customer (user)

According to the Shared Responsibility Model, AWS is responsible for the security of the cloud infrastructure itself, while customers are responsible for security within the cloud.

In other words, items concerning security within the cloud service, such as the following, must all be handled by the user:

• OS updates
• Managing access rights
• Applying security patches

    AWS’s “Security of the Cloud” Responsibility

    On the other hand, AWS (the vendor) has stated that it is responsible for “securing the infrastructure” that runs all services provided in the AWS cloud.

    Specifically, AWS is responsible for ensuring the security of the hardware, software, networking, and facilities that it provides, and AWS implements these security measures based on international best practices.

    How AWS handles security


    The following are some of the things that users should do as “AWS security measures.”

    • Strengthening authentication and access rights through correct IAM configuration
    • Security-conscious settings for each AWS service
    • Checking the audit log
    • Detecting suspicious behavior through security monitoring
    • Regular application of security patches
    • Educating employees on cloud service usage
    • Check AWS security guidelines and official website

    etc.

AWS provides a variety of security services that users can use to protect their information assets from various security threats. For details, see “The eight main security services provided by AWS” below.

    In addition, to build a strong AWS security system, you need to check the official white papers and guidelines and build the correct settings and security system in accordance with the requirements.

    How to Check AWS Security Compliance Requirements

    AWS has obtained multiple third-party certifications, including ISO 27001 and ISO 22301, as proof of appropriate security management.

    For more information about the security measures AWS implements and how it maintains compliance, see the AWS Compliance Program page.

If you would like to check whether AWS meets the security and compliance requirements your company requires, you can consult the white papers on the official website.

    The eight main security services provided by AWS


    AWS offers a variety of security services to help users meet high security standards.

These services are effective for access-rights management, compliance, and data protection, the areas where users bear security responsibility under the aforementioned shared responsibility model.

    Some of the most popular AWS security services include:

    • AWS Identity and Access Management (IAM)
    • Amazon GuardDuty
    • Amazon Inspector
    • AWS WAF (Web Application Firewall)
    • Amazon Macie
    • AWS CloudTrail
    • AWS Key Management Service (KMS)
    • Amazon VPC (Virtual Private Cloud)

    Here we introduce the eight main security services provided by AWS.

    1. AWS Identity and Access Management (IAM)

AWS Identity and Access Management (IAM) is a web service that allows you to securely manage user identities and resources on AWS systems. By using IAM, you can configure “authentication” and “authorization” for your AWS account.

Specifically, it consists of functions such as:

• User registration and account management (issuance, modification, deletion)
• Identification and authentication when using the system
• Granting or denying access to information resources based on permissions set by the administrator
• Access log recording

    All permissions and authentication rules in IAM are managed by “policies,” which are granted to “groups” and “users (accounts).”

    For example, if an employee (user) wants to upload a file to storage, the administrator grants the user the permissions written in the policy. The user can perform specific actions by being given specific permissions through IAM.

    Even for the same file, it is possible to set up the system so that “User A” has no access rights, “User B” has view-only rights, and “User C” has operation rights.
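The file-access example above can be expressed concretely as an IAM identity policy. The sketch below builds and inspects such a policy in Python; the bucket name and statement ID are hypothetical, but the overall grammar (Version, Statement, Effect, Action, Resource) follows AWS’s documented IAM policy format.

```python
import json

# Minimal IAM identity policy granting upload-only access to one bucket.
# Bucket name and Sid are hypothetical; the grammar follows AWS's IAM policy format.
upload_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadOnly",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::example-team-bucket/*",
        }
    ],
}

def allowed_actions(policy: dict) -> set:
    """Collect every action explicitly allowed by the policy."""
    actions = set()
    for stmt in policy["Statement"]:
        if stmt["Effect"] == "Allow":
            actions.update(stmt["Action"])
    return actions

print(json.dumps(upload_policy, indent=2))
print("s3:PutObject" in allowed_actions(upload_policy))   # upload is permitted
print("s3:GetObject" in allowed_actions(upload_policy))   # reading is not granted
```

Because IAM denies anything not explicitly allowed, a user holding only this policy could upload files but not read or delete them.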

    2. Amazon GuardDuty

    Amazon GuardDuty is a fully managed threat detection service.

    Utilizing machine learning, it monitors API calls and communication logs to detect unauthorized access and malicious behavior.

There is no need for complicated configuration or installation; you simply enable it in AWS. AWS officially recommends enabling Amazon GuardDuty in all supported regions.

    The detection results allow you to check the severity of the threat and details of the detection required for remediation.

    3. Amazon Inspector

    Amazon Inspector is an automated vulnerability management service that continuously diagnoses AWS workloads to check for software vulnerabilities and unintended network information exposure.

    It is used to assess the security compliance of applications and resources running within your AWS cloud environment, helping to identify vulnerabilities and resolve security issues.

It can automatically diagnose vulnerabilities in Amazon EC2 (the virtual server service provided by AWS), Amazon ECR (which stores and shares container images), and AWS Lambda (which executes code).

    4. AWS WAF (Web Application Firewall)

    AWS WAF (Web Application Firewall) is a specialized firewall for web applications provided by Amazon. It protects applications from malicious attacks such as SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF).

    AWS WAF also allows you to control access to CloudFront, ALB, API Gateway, etc. By setting access “rules” within the WAF’s “WebACL,” you can determine which network communications to “allow” or “deny.”

    You can start using AWS WAF immediately by simply enabling it on Amazon CloudFront, which accelerates web content such as image files, or on Application Load Balancer, a load balancer that distributes the load of web services.
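To make the “rules inside a WebACL” idea concrete, the sketch below assembles a WebACL fragment in the shape used by the AWS WAFv2 API: each rule carries a name, a priority, a statement describing what to match, and an action (or override action for managed rule groups). The ACL and metric names are hypothetical, and this is an illustrative fragment rather than a ready-to-deploy configuration.

```python
# Sketch of a WAFv2 WebACL rule in the shape used by the AWS WAFv2 API.
# It attaches AWS's managed "Common Rule Set"; ACL and metric names are illustrative.
common_rule_set = {
    "Name": "AWS-CommonRuleSet",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }
    },
    # Managed rule groups use OverrideAction instead of Action.
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "common-rule-set",
    },
}

web_acl = {
    "Name": "example-web-acl",          # hypothetical name
    "DefaultAction": {"Allow": {}},     # allow anything no rule blocks
    "Rules": [common_rule_set],
}

# Rules with lower Priority values are evaluated first.
for rule in sorted(web_acl["Rules"], key=lambda r: r["Priority"]):
    print(rule["Name"], "->", rule["Statement"]["ManagedRuleGroupStatement"]["Name"])
```

The `DefaultAction` determines what happens to requests no rule matches, so a WebACL that blocks by default with explicit allow rules is also possible.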

    5. Amazon Macie

    Amazon Macie is a service that uses machine learning and pattern matching to automate the discovery, classification, and protection of sensitive data in your Amazon S3 buckets.

    It automatically detects and classifies S3 objects containing personal information, allowing you to manage sensitive data safely and efficiently without incurring additional effort.

    When using Amazon Macie, you need to enable it for each region (specific range). Once enabled in a region, you can view the detection results for all accounts in that region at once.

    6. AWS CloudTrail

AWS CloudTrail is a service that records all user activity and API usage from the moment you create an AWS account. It is useful for analyzing and remediating the causes of unauthorized operations or unexpected behavior.

    It’s automatically activated when you create your account, so you don’t need to do anything yourself.

    You can view the operation records for the past 90 days by accessing the AWS CloudTrail console or AWS CLI. Since records are deleted after 90 days, if you want to keep the operation records, you can move them to Amazon S3 for storage.
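CloudTrail delivers each operation as a JSON record, so the 90-day event history (or an S3 archive) can be analyzed programmatically. The sketch below filters a few sample records for failed console logins; the records themselves are hypothetical, but the field names (eventName, eventTime, sourceIPAddress, responseElements) follow CloudTrail’s documented record format.

```python
# Sketch: scan CloudTrail-style records for failed console logins.
# The records are hypothetical samples; the field names match
# CloudTrail's documented record format.
sample_events = [
    {
        "eventTime": "2025-10-01T02:14:07Z",
        "eventName": "ConsoleLogin",
        "sourceIPAddress": "203.0.113.10",
        "responseElements": {"ConsoleLogin": "Failure"},
    },
    {
        "eventTime": "2025-10-01T09:30:12Z",
        "eventName": "PutObject",
        "sourceIPAddress": "198.51.100.4",
        "responseElements": None,
    },
    {
        "eventTime": "2025-10-01T02:15:41Z",
        "eventName": "ConsoleLogin",
        "sourceIPAddress": "203.0.113.10",
        "responseElements": {"ConsoleLogin": "Success"},
    },
]

def failed_logins(events):
    """Return console-login events whose response indicates a failure."""
    return [
        e for e in events
        if e["eventName"] == "ConsoleLogin"
        and (e.get("responseElements") or {}).get("ConsoleLogin") == "Failure"
    ]

for e in failed_logins(sample_events):
    print(f'{e["eventTime"]}  failed login from {e["sourceIPAddress"]}')
```

Repeated failures from one source address in a short window, as a query like this would surface, are a typical trigger for tightening IAM or enabling MFA.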

    7. AWS Key Management Service (KMS)

    AWS Key Management Service (KMS) is a service that allows you to create and manage keys for encrypting, decrypting, and digitally signing data across applications and AWS services. It can encrypt and decrypt text files stored in Amazon EC2, Amazon S3, etc.

The master keys are stored in AWS KMS and cannot be exported locally, so there is no need to worry about them being lost, damaged, or stolen.

It is also integrated with AWS CloudTrail, mentioned earlier, allowing you to audit who used which key, when, and with which resource.

    8. Amazon VPC (Virtual Private Cloud)

    Amazon VPC (Virtual Private Cloud) is your own dedicated virtual network space that you can build within your AWS account.

    Within Amazon VPC, you can communicate with AWS services such as the virtual server “Amazon EC2” and the database “RDS.” You can also allow Amazon EC2 instances to communicate internally with each other and manage connections to external networks.

    Spaces built with Amazon VPC allow you to manage networks and resources all at once, which makes operation and maintenance more efficient.

    If you want to consult a professional about AWS security settings, we recommend “Cloud Diagnostics”


    If your company is concerned about AWS security, we recommend that you use the various AWS services introduced above, as well as take advantage of Cloud Diagnostics, which allows experts to identify AWS security flaws and provide advice.

    By having professionals point out and correct inadequate settings in “access permissions” and “authentication,” which are the cause of AWS security incidents, it becomes possible to build a secure AWS environment.

     

    • I want to check whether my company’s AWS settings are correct as they are.
    • There is no one in the company who is knowledgeable about security, and I would like to consult with an expert.
    • I want to strengthen AWS security with as little effort and cost as possible.

    We recommend our “AWS Security Assessment” to companies facing these challenges.

    We identify potential security issues in your AWS environment and our security experts propose appropriate remediation measures to prevent incidents such as unauthorized use of AWS or information leaks due to misconfigurations.

    We conduct detailed assessments based on CIS benchmarks, AWS Security Hub, and our own unique criteria, and our security experts will check the actual management screen and provide you with a detailed, easy-to-understand report.

    For details of the service, please see the following page.

Summary


    In this article, we introduced the importance of security in AWS and official security services.

    The number of users of AWS and other cloud services has been increasing in recent years, and their use will likely become essential in the future. We encourage you to consider implementing appropriate AWS security measures based on your company’s environment and needs.

  • Top Tech Trends That Will Redefine Careers in 2025

    Technological developments are currently disrupting work environments and influencing the nature of careers in diverse fields such as biotechnology, data science, cybersecurity, software development, and engineering. In addition to transforming workflows, advances in AI, quantum computing, and green technologies are also driving demand for specialized profiles. For professionals ready to face the world of work of tomorrow, this rapidly changing landscape presents a wealth of opportunities.

    Table of Contents
    1. Cutting-edge generative AI and machine learning
    2. Sustainable technologies and ecological innovations
    3. Autonomous systems
    4. Quantum computing
    5. Extended Reality (XR)
    6. 5G and edge computing architecture
    7. Innovations in biotechnology and health
    8. Evolution of blockchain technology
    9. Digital twins and simulation technology
    10. Deep tech and spatial computing
    11. Embracing the future of new technologies

    Are you ready to explore new careers and enter the world of spatial computing, autonomous systems, or generative AI? Let’s take a closer look at the technologies transforming the world of work and the careers they create. 

    Cutting-edge generative AI and machine learning

    Generative AI, also known as GenAI, is revolutionizing several industries, including content creation, design, and healthcare. Using cutting-edge machine learning algorithms, artificial intelligence is opening up new possibilities and improving efficiency. AI is transforming what was previously unthinkable, from creating elaborate works of art to writing sophisticated programming code to making medical diagnoses—all with unprecedented levels of accuracy.

In our article, “Can AI Overtake Human Intelligence?”, we take a closer look at the remarkable advances in AI technology and explore its potential to revolutionize our professional and personal lives.

Main applications: personalized teaching tools, AI-assisted design, and automated coding systems.
Profiles sought: generative AI specialists, machine learning engineers, and prompt engineers (query writers).

    Sustainable technologies and ecological innovations

The development of greener technologies such as carbon capture systems, green clouds, and energy-efficient cloud computing has become necessary amid global pressure for greater sustainability. Additionally, a growing number of sectors, including construction and manufacturing, are investing in sustainable materials and green packaging. The electronics sector is also adopting greener and more energy-efficient practices as more and more companies take action to reduce their carbon footprint and environmental impact.

    By reducing emissions, optimizing resource use and facilitating real-time maintenance and monitoring, these technologies help companies achieve their environmental objectives while increasing their productivity and resilience.

Main applications: renewable energy networks, carbon footprint analysis, and sustainable manufacturing practices.
Profiles sought: sustainable development engineers, green IT consultants, and environmental data analysts.

    Autonomous systems

Autonomous systems are changing the game in manufacturing, logistics, and transportation, from drones to self-driving cars to fully automated factories. By reducing human error and simplifying complex processes, these technologies can increase productivity, improve safety, and reduce operating costs. For example, the food industry is particularly benefiting from automated solutions, including robotic assembly lines, streamlined food production, and safer food products, thanks to the use of AI.

Smart distribution networks, precision agriculture, and flexible supply chains are just a few examples illustrating the new opportunities created by increasingly sophisticated automated systems.

Main applications: smart warehouses, autonomous delivery networks, and precision agriculture.
Profiles sought: robotics engineers, AI operations managers, and drone pilots.

    Quantum computing

Quantum computing will soon revolutionize businesses worldwide. Thanks to its unprecedented ability to process colossal volumes of data at extremely high speed, it is poised to transform fields such as medical discovery, cryptography, and logistics optimization. This capability could spur innovation across industries by solving problems previously intractable for conventional computing.

According to Honeywell, quantum computing will help us solve a variety of problems in different industries, including machine learning, simulation, and optimization. Honeywell is looking at specific industries that will likely be impacted by quantum computing, including aerospace, chemicals, healthcare, pharmaceuticals, logistics, robotics, and finance.

Main applications: supply chain optimization, pharmaceutical product development, and cybersecurity improvement.
Profiles sought: quantum software developers, quantum physics researchers, and cryptography specialists.

    Extended Reality (XR)

By enabling the creation of immersive and interactive experiences, augmented reality (AR), virtual reality (VR), and mixed reality (MR)—collectively referred to as extended reality (XR)—are disrupting several industries, including entertainment, healthcare, and security. By 2025, these technologies will play a crucial role in improving customer interactions, optimizing remote collaboration, and changing job training. Extended reality will enable businesses to create innovative products, upskill employees through realistic simulations, and create captivating, personalized customer experiences through hardware and software developments.

Main applications: immersive learning environments, virtual product showrooms, and remote surgery.
Profiles sought: extended reality content creators, interaction designers, and virtual environment architects.

    5G and edge computing architecture

The widespread deployment of 5G networks and the emergence of edge computing are transforming real-time data processing by bringing it closer to devices. In addition to reducing latency and improving performance, this powerful combination is accelerating advances in the Internet of Things (IoT), enabling the development of more efficient autonomous vehicles and supporting the creation of efficient smart city infrastructures. For example, in Switzerland, cities are at the forefront of technological developments, investing in smart parking solutions, LoRaWAN communication technology, and the IoT.

Main applications: connected medical devices, real-time industrial automation solutions, and urban traffic management.
Profiles sought: edge computing developers, IoT specialists, and network architects.

    Innovations in biotechnology and health

The healthcare industry is entering a new era. AI is already enabling medical professionals to better understand their patients’ health and even save lives. Personalized medicine and preventative healthcare are becoming the norm.

    Check out our list of the 10 must-see healthcare technology trends and discover the technologies that will likely have the biggest impact on the future of the industry.

Main applications: genome editing, AI-powered medical diagnostics, and digital therapeutics.
Profiles sought: biotechnology researchers, health data analysts, and digital health consultants.

    Evolution of blockchain technology

    Blockchain technology extends far beyond the realm of cryptocurrencies and finds other revolutionary applications such as digital identity verification, supply chain management, and secure data exchange. Thanks to its decentralized structure, blockchain ensures greater transparency, increased security, and operational efficiency. As one of the cornerstones of the future of secure digital interactions, this technology enables real-time tracking, protection of personal identities against fraud, and reliable data exchange across all industries.

Main applications: decentralized finance (DeFi) and digital identity verification.
Profiles sought: blockchain developers, DeFi verifiers, and DeFi strategy specialists.

    Digital twins and simulation technology

    By creating virtual replicas of physical systems, digital twin technology offers powerful real-time simulation and monitoring capabilities. By integrating sensor data and the Internet of Things, these digital replicas facilitate the optimization of design, testing, and operational procedures. 

    By modeling patient responses to treatments, they enable the development of personalized medicine. In manufacturing, digital twins help optimize predictive maintenance and product development. They improve sustainability and infrastructure management in urban development. 

Main applications: smart factory simulations, urban infrastructure planning, and medical device prototyping.
Profiles sought: digital twin engineers, simulation technology developers, and specialists in industrial IoT applications.

    Deep tech and spatial computing

    Advances in deep tech and spatial computing are reshaping industries by merging the physical and digital worlds. Deep tech innovations, including AI, quantum computing, and advanced robotics, are enabling major advances in problem solving and complex systems. 

Spatial computing leverages the AR, VR, and MR technologies mentioned earlier to create interactive and immersive environments, transforming the way we work, learn, and interact with data. According to the World Economic Forum, the ability to overlay digital elements on the physical world makes spatial computing the next big technological advancement.

    Main applications: AI-powered robotics, AR/VR training platforms, and quantum computing solutions.
    Profiles sought: deep tech engineers, spatial computing developers, and AR/VR content creators.

    Embracing the future of new technologies

    The rapid pace of technological advancements means that adaptability and continuous learning are more important than ever. To stay ahead of the curve, professionals must focus their efforts on developing skills in AI, cybersecurity, data analytics, and sustainable technology solutions.

    By embracing these trends, both individuals and businesses can thrive in this time of unprecedented innovation and opportunity.

  • Key Trends Redefining the AEC Industry in 2025


    For decades, the AEC industry has operated in an extremely complex, fragmented, and project-based manner. Each construction project is typically planned from scratch, developed to unique specifications, and rarely iterated.

    The value chain is local and highly fragmented, both vertically and horizontally, with numerous stakeholders involved at each stage, leading to friction and inefficiencies at interfaces. The industry has also been slow to adopt end-to-end digital tools, fostering a capital-light approach that limits innovation.

    Combined with dwindling human resources and expertise, these challenges lead to delays, cost overruns, and misalignment between design and execution.

    However, the AEC landscape as we know it is rapidly evolving in 2025. New trends and technologies are redefining how buildings are designed, constructed, and managed. Below, we explore the key trends driving this transformation—and how they’re changing the future of AEC, from a disjointed and inefficient approach to one that’s connected, digital-first, and data-driven.

    Industry Disruptors: What’s Changing in AEC?

    Sustainable development becomes a key performance indicator in design

    Sustainability is no longer an afterthought; it’s a design necessity. Governments and clients are demanding environmentally friendly buildings, requiring architects to incorporate embodied and operational carbon assessments from the outset.

    Germany, for example, has set a 68% emissions reduction target for buildings by 2030, reflecting the global trend toward greener infrastructure. As a result, design software must evolve to include real-time sustainability metrics, allowing architects to track carbon impact as a key performance indicator (KPI) throughout the design process.

    Architects and engineers wear many hats

    Regulations, compliance, and changing project demands are putting design professionals under pressure. Architects and engineers are expected to take on more responsibilities, from energy efficiency compliance to risk assessment, while managing complex design workflows.

    A 2021 RIBA survey found that 40% of large and medium-sized UK architecture firms were facing staff shortages, making efficiency more crucial than ever. To stay ahead, future software will need to improve workflows, automate compliance checks, and streamline reporting to reduce the administrative burden on already overworked professionals.

    Modular and prefabricated construction is becoming more widespread

    The shift from traditional construction to modular and prefabricated methods is accelerating. Labor shortages, rising costs, and technological advances are driving the adoption of DfMA (Design for Manufacture and Assembly), which enables faster and more efficient construction.

    With DfMA forecast to grow at around 10% between 2024 and 2027, software must adapt to support standardized and prefabricated design modules. This means better tools for configuring modular components, ensuring seamless integration from design to construction in an increasingly industrialized workflow.

    Digital twins support the design lifecycle

    The growth of the digital twin market reflects the industry’s demand for real-time digital documentation. Digital twins are evolving from futuristic concepts to indispensable tools, providing a continuous data loop throughout a building’s lifecycle.

    From early design stages to facility management and demolition, these virtual models improve decision-making and reduce costly errors. The next evolution of software must focus on seamless model integration to ensure data flow from design to operation.

    More stakeholders, more complexity

    As projects grow in scale, stakeholder involvement increases. Infrastructure investment is increasing globally, and megaprojects will represent a growing share of GDP, increasing the need for tools to manage collaboration between diverse teams.

    Software must enable real-time data sharing, improve multi-user workflows, and streamline large-scale project management. Seamless integration between platforms is no longer a luxury; it’s a necessity to reduce friction between designers, contractors, and owners in complex construction ecosystems.

    Game-changing technological transformations in the AEC sector

    Cloud-powered workflows take center stage

    As projects become increasingly complex, it is no longer possible to process massive models locally. Cloud computing is now essential, allowing architects and engineers to work on large-scale projects from anywhere.

    Currently, many AEC professionals already access their data primarily via the cloud, a trend that will only increase. This shift means that future workflows will rely entirely on cloud processing, especially for demanding tasks such as real-time rendering, structural analysis, and large-scale collaboration.

    Immersive design is the new normal

    The demand for better, more interactive user experiences is reshaping design workflows. The global market for augmented reality in construction is expected to grow significantly over the next decade, and many clients now expect to experience their projects virtually before a single brick is laid.

    Next-generation design tools must offer seamless AR/VR integration, enabling real-time model exploration and scenario testing. With the arrival of digital natives in the industry, a seamless user interface will be essential, influencing software adoption and customer satisfaction.

    AI boosts design processes

    AI is no longer just an experiment in AEC—it’s becoming a central part of the design process. Generative AI is expected to revolutionize workflows, and the construction industry stands to benefit from substantial added value thanks to AI.

    From automating repetitive tasks to predicting project risks, AI-driven software will dramatically reduce iteration time, transforming weeks of work into hours. AI is expected to play a major role in concept development, material selection, and even real-time structural optimization.

    The Future of Construction: From Fragmentation to Integration

    The construction industry is undergoing a unique transformation. What was once a fragmented, manual, and inefficient process is becoming smarter, faster, and more connected. The trends emerging for 2025—AI-driven automation, digital twins, modular construction, and cloud-based workflows—aren’t just innovations; they’re completely reshaping the DNA of architecture, construction, and engineering.

    The days of starting from scratch for every project are over. Prefabrication and DfMA introduce repeatable and scalable solutions that reduce waste and improve efficiency. AI eliminates weeks of manual work, while digital twins bridge the gap between design and execution, ensuring projects stay on track. Cloud technology makes real-time collaboration the new norm, reducing costly communication errors. The AEC industry is no longer just about building buildings, but building smarter.

  • Robert Sapolsky’s “Determined”: Does Free Will Exist?

    Robert Sapolsky’s “Determined”: Does Free Will Exist?

    What controls our lives – our own choices, or fate? We look at the answer modern science gives with the help of Robert Sapolsky’s new book “Determined: A Science of Life Without Free Will.”
    Renowned neuroscientist and Stanford University professor Robert Sapolsky is one of the most respected authors and popularizers of science. For over 30 years, he studied the behavior of baboons in Kenya, researching the mechanisms of stress in primates. Drawing on this material, Sapolsky wrote several books that became international bestsellers, most notably “Behave: The Biology of Humans at Our Best and Worst,” whose Russian translation received the Enlightenment Prize in 2020. In his new book, Sapolsky takes his ideas in a radical direction: he considers free will an illusion, and our decisions predetermined by biological factors and the environment.

    As any reader who has been, is, or will be a teenager no doubt knows, it is a very difficult stage of life. Emotional turbulence, reckless risk-taking and thrill-seeking, a peak time for extremes of both pro- and antisocial behavior, the desire to stand out while at the same time fitting in: behaviorally, adolescence is a beast all its own.

    The same goes for neuroscience. Much of the research on adolescence asks why teenagers act like teenagers; our goal here is to understand how the adolescent brain helps explain the button pushing of adulthood. Conveniently, the same fascinating area of neuroscience applies to both. By the time adolescence begins, the brain is fairly close to its mature version, with adult neuron and synapse density, and myelination largely complete. Yet there is one area of the brain that, surprisingly, will take another decade to mature. What is that area? The frontal cortex, of course. It “matures” much more slowly than the rest of the cortex – a phenomenon common to all mammals, but especially noticeable in primates.

    Some reasons for this delay are easy to explain. Myelination of the brain, for example, begins in fetal life and gradually increases until it reaches adult levels; the frontal cortex is no exception, it is just that the process there is significantly delayed. But when it comes to neurons and synapses, the picture is completely different. When a child is just entering adolescence, their frontal cortex has more synapses than an adult’s. During adolescence and early adulthood, the frontal cortex prunes synapses that are unnecessary, unimportant, or simply wrong, and the cortex itself gradually becomes leaner and more efficient.

    As a clear example, a 13-year-old and a 20-year-old can perform equally well on a test assessing the functioning of the frontal cortex, but the former will have to use more of it to pass the test successfully.

    So the frontal cortex—responsible for executive function, long-term planning, delay of gratification, impulse control, and emotion regulation—is not fully functional in teenagers. What do you think that explains? Pretty much everything that happens in adolescence, especially if you add to the explanation the tsunami of estrogen, progesterone, and testosterone that floods the brain. The relentless force of drives and urges, all held in check by the flimsy brakes of an immature frontal cortex.

    We don’t care about the delay in maturation of the frontal cortex because it causes teenagers to get silly tattoos; we care that adolescence and early adulthood are the years of a massive construction project in the most interesting part of the brain. The implications are clear. If you’re an adult, your adolescent experiences of trauma, excitement, love, failure, rejection, happiness, despair, acne—all of these played a disproportionately large role in shaping the frontal cortex that now helps you think about those two buttons. There’s no doubt that the vastly varied experiences of adolescence shape the vastly varied frontal cortexes of adults.

    One particularly fascinating consequence of this slow maturation is important to remember when we get to the section on genes. By definition, if the frontal cortex is the last part of the brain to mature, then it is the part of the brain that is shaped minimally by genes and maximally by the environment. This raises the question of why the frontal cortex matures so slowly. Is the blueprint itself more complex than the rest of the cortex? Does it require specialized neurons, neurotransmitters that are difficult to synthesize, unique synapses so bizarre that they require extensive assembly instructions? No, there is almost nothing unique about the frontal cortex.

    Thus, judging by the complexity of the frontal lobes alone, delayed maturation is not inevitable; it is not that the frontal cortex would mature faster if only it could.

    No, this delay actively evolved; it was selected for. It is this area of the brain that chiefly decides how to do the right thing when doing so is hardest, and no genes can specify what counts as the right thing to do. That must be learned, long and hard, from personal experience. The same is true for any primate forced to navigate the intricacies of social relations: whom to be rude to, to whom to bow, whom to befriend, whom to stab in the back.

    If this is important to baboons, what can we say about humans? We are forced to internalize the rationalizations and hypocrisies of our culture – thou shalt not kill, unless it is one of THEM, in which case here is a medal for you. Don’t lie, unless the lie promises a huge profit, or unless it is for a genuinely good cause: “No, sir, there are no refugees hiding in my attic, of course!” Laws to be obeyed without fail, laws to ignore, laws to resist. Live as if each day were your last, and at the same time as if it were the first day of the rest of your life. And so on and so forth. Just think: other primates finish developing their frontal cortex at puberty, but it takes us another decade. There is something remarkable here – the genetic programming of the human brain has evolved to free the frontal cortex as much as possible from the influence of genes. We’ll learn more about the frontal cortex in the next chapter.

    So, adolescence is the final phase of formation of the frontal cortex of the brain, and this process is strongly influenced by the environment and experience. Moving further back into childhood, we find there a large-scale construction of all areas of the brain, a process of gradual increase in the complexity of neural networks and myelination.

    Naturally, behavior also becomes more complex. Logical thinking, cognitive abilities, and emotions develop, which are necessary for making moral decisions (enabling, for example, the transition from obeying laws in order to avoid punishment to obeying laws because what would happen to society if people did not obey them?). Empathy develops (the ability to sympathize with emotional pain, not just physical pain, abstract pain, pain you have never experienced yourself, the suffering of people who are not at all like you). Impulse control strengthens (which at first helps you not to eat a marshmallow but to wait a few minutes for someone to give you two marshmallows, and which later helps you focus on your 80-year project of choosing a nursing home to your liking).

    In other words, the hard stuff comes after the easy stuff. Child development researchers typically divide these maturational trajectories into “stages” (for example, the canonical stages of moral development identified by Harvard psychologist Lawrence Kohlberg). As you might expect, there is enormous variation in exactly where children of the same age may be in their maturational process, how quickly they move from one stage to the next, and which stage carries over into adulthood.

    Back on topic: you now have to ask where individual differences in maturation come from, how much of them is controllable, and how they help make you the you now contemplating those buttons. What factors influence maturation? Here is a list of the most common suspects, with brief descriptions:

    Parenting style, of course. Differences in parenting style were famously categorized by UC Berkeley psychologist Diana Baumrind. There is the authoritative parenting style, which places high demands and expectations on the child but is flexible in responding to the child’s needs; this is the style neurotic, middle-class parents typically aspire to. Then there is the authoritarian parenting style (high demands, low responsiveness: “Because I said so!”), the permissive style (low demands, high responsiveness), and the indifferent style (low demands, low responsiveness). Each of these produces different types of adults. As we will see in the next chapter, the socioeconomic status of the parents also matters greatly; for example, low socioeconomic status predicts delays in the maturation of the frontal cortex even in preschoolers.

    Peer group socialization, in which peers of the same age model different behaviors with varying degrees of attractiveness. Developmental psychologists often underestimate the importance of peers, but primatologists know it well. Humans have invented a new way of transmitting knowledge between generations, in which an adult professional, such as a teacher, deliberately instructs the young. In contrast, other primates typically learn by watching their elders.

    Environmental influences: Is the park in your neighborhood safe? Are there more bookstores or liquor stores in the area? Is it easy to find healthy food? What is the crime rate? Nothing unexpected here.

    Cultural beliefs and values that influence all of the above. As we will see, culture has a significant impact on parenting style, the behavior modeled by peers, and the kinds of communities that form within it. Cultural diversity in hidden and overt rites of passage, in the kinds of religious communities, and in the behaviors children are encouraged to perform, such as earning badges for academic achievement or bullying outcasts, also plays an important role.

  • Connectome: How Scientists Map the Brain

    Connectome: How Scientists Map the Brain

    The human brain consists of billions of neurons connected to each other. To understand their work, scientists create a map of such connections – a connectome. We will tell you why it is important to study it and what projects exist in this area

    Connectome and connectomics: what is it

    The brain is a complex organ. It consists of neurons that are connected to each other. At the points where they contact one another (synapses) there are tiny gaps. Synapses themselves are structures that can be seen only with powerful electron microscopes.

    When one neuron wants to “pass a message” to another, it releases special chemicals called neurotransmitters. They cross the synaptic gap and affect the neighboring cell, causing an electrical signal in it. This is how neurons exchange information: in this way, they influence thinking, emotions, and behavior.

    For centuries, researchers have studied the structure of the brain, its cellular composition and biochemical properties. But the organization of connections between different areas remained a mystery until recently. The development of computer technology, artificial intelligence (AI) and big data analysis methods made it possible to solve this problem. A new field of knowledge emerged: connectomics.

    The science of brain mapping (the process of creating detailed maps of brain regions) addresses how individual neurons and their associations interact with each other to influence thoughts and feelings.

    The main focus of this science is the connectome: a map of neural pathways and connections in the brain. The connectome can be compared to the wiring diagram of a technical device, only far more complicated.

    The term was proposed in 2005 by scientists Olaf Sporns, Rolf Kötter, Giulio Tononi and Patrick Hagmann. The concept was formed by analogy with the genome: if the genome is a complete set of genes, then the connectome is a complete map of neural connections. Scientists have found that it is unique to each person. The connectome differs even between two genetically identical people – identical twins.
    A detailed map of all the connections in the human brain has not yet been compiled, because the organ is so complex: it consists of approximately 86 billion neurons, which form some 100 trillion connections with one another. Mapping them requires analyzing an enormous amount of data, so for now scientists study the brain by compiling connectomes of individual areas of the cerebral cortex.
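A back-of-envelope calculation makes the scale of the problem concrete. The sketch below uses the neuron and synapse counts quoted above; the 16 bytes per connection is a hypothetical storage assumption (two 64-bit neuron identifiers per edge), and real reconstructions record far more detail per synapse:

```python
# Why a whole-brain connectome is a big-data problem, using the counts above.
# BYTES_PER_SYNAPSE is an illustrative assumption, not a measured value.

NEURONS = 86e9          # ~86 billion neurons
SYNAPSES = 100e12       # ~100 trillion connections
BYTES_PER_SYNAPSE = 16  # assumed: two 64-bit neuron IDs per connection

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15

print(f"Average connections per neuron: {SYNAPSES / NEURONS:,.0f}")
print(f"Bare edge list alone: ~{petabytes:.1f} PB")
```

Even this minimal bookkeeping, with no imaging data at all, already lands in the petabyte range, which is why current projects map cubic millimeters rather than whole brains.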

    Neuroscientist Sebastian Seung, a professor at the Princeton Neuroscience Institute, has also noted that the map of connections changes over time: new connections are created, others are destroyed. These changes are driven by neural activity, which in turn depends on mental experience, perception, and cognition. Simply put, a person’s experience can change their connectome.

    The Key to Treating Diseases and the Mysteries of the Mind
    Scientists analyze neural connections to better understand how the brain is structured and works. This is important for several reasons.

    Treatment of diseases

    In the early stages of disease, a connection map makes it possible to spot abnormalities before serious symptoms appear and to begin treatment sooner.

    In the future, with the development of connectomics, doctors will be able to better select medications, as well as predict how a patient’s body will respond to certain treatments. Brain mapping will most likely complement the diagnostic methods used.

    Studying and improving brain function

    With the help of a connection map, experts study how intelligence is structured, which networks are responsible for different talents and personal qualities.

    Connectome research also offers new insights into human aging. By mapping how neural connections change over time, scientists gain insight into critical periods in development. This knowledge could help create new methods for maintaining cognitive health (the ability to think clearly, learn, and remember).

    Development of artificial intelligence (AI)

    In the field of machine learning, the connectome is inspiring scientists to design artificial neural networks. By mimicking the structure and function of the brain, researchers may be able to create even more powerful AI systems.

    Brain mapping methods

    Connectomics brings together neuroscientists, biologists, and big data engineers. The researchers use a variety of methods to study connections.

    • Functional magnetic resonance imaging (fMRI). Helps to see which areas of the brain work together when the organ performs a task – thinking, remembering, or simply resting.
    • Tractography. A method of visualizing nerve fibers, performed with a magnetic resonance imaging (MRI) scanner. It shows how water molecules move along nerve fibers – the white matter tracts – and thus allows their integrity to be assessed.
    • Microscopy. Using powerful microscopes, scientists can examine tiny details, such as the activity of neurons inside the brains of animals during experiments.
    • Machine learning methods. Algorithms help process large volumes of data on neural connections and find patterns in them.
    • 3D modeling. To better understand how the brain works, scientists build special models in which neurons are depicted as dots (nodes) and their connections as lines. Such models allow us to see how neural networks are structured and work, and also show how the brain’s structure changes over time.
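The nodes-and-lines model from the last bullet can be sketched in a few lines of Python. The neuron names and wiring below are invented purely for illustration; real connectome tools use the same idea at vastly larger scale:

```python
# Minimal sketch of a connectome as a graph: neurons are nodes,
# connections are directed edges stored in an adjacency dict.
# All names and wiring here are hypothetical.

connectome = {
    "sensory_1": ["inter_1"],
    "sensory_2": ["inter_1", "inter_2"],
    "inter_1":   ["motor_1"],
    "inter_2":   ["motor_1", "motor_2"],
    "motor_1":   [],
    "motor_2":   [],
}

def out_degree(graph, node):
    """Number of outgoing connections a neuron makes."""
    return len(graph[node])

def in_degree(graph, node):
    """Number of incoming connections a neuron receives."""
    return sum(node in targets for targets in graph.values())

for neuron in connectome:
    print(f"{neuron}: out={out_degree(connectome, neuron)}, "
          f"in={in_degree(connectome, neuron)}")
```

Counting in- and out-degree like this is the simplest version of what network neuroscience does when it looks for hub regions with unusually many connections.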

    Connectome research projects

    Early works

    Scientists took their first steps in studying brain connections in the second half of the 20th century. In 1986, a group led by British biologist Sydney Brenner, head of the MRC Laboratory of Molecular Biology in Cambridge, compiled the first complete map of the nervous system of the small worm C. elegans. It has only 302 neurons, so mapping all of its connections was technically feasible.

    The C. elegans map showed the pathway a signal takes within the organism to trigger a particular behavior, such as egg laying or a response to touch.
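Tracing such a pathway through a mapped network amounts to a shortest-path search over the wiring diagram. The tiny network below is invented for illustration (the real worm connectome has 302 neurons and far richer wiring):

```python
# Toy signal-pathway trace in the spirit of the C. elegans wiring diagram.
# The network is hypothetical: a touch receptor feeding interneurons
# that converge on a motor neuron.
from collections import deque

wiring = {
    "touch_receptor": ["inter_A", "inter_B"],
    "inter_A": ["motor"],
    "inter_B": ["inter_A"],
    "motor": [],
}

def signal_path(graph, start, goal):
    """Breadth-first search: shortest chain of neurons from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no pathway connects the two neurons

print(signal_path(wiring, "touch_receptor", "motor"))
# -> ['touch_receptor', 'inter_A', 'motor']
```

Given a complete map, the same search answers questions like "which neurons must fire for this stimulus to reach that muscle?", which is exactly what the worm map made possible.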

    In 2008, a team of researchers led by Professor Patrick Hagmann, director of the Connectomics Laboratory at the University of Lausanne, visualized the connections between areas of the human cerebral cortex for the first time.

    A map of connections between areas of the cerebral cortex (macroconnectome) obtained using MRI in humans (Photo: theplosblog.plos.org)
    They noticed that the map of such connections differed in different people. This confirmed the hypothesis that each person’s brain is unique not only in the volume of gray and white matter, the thickness of the cortex, but also in the structure of connections between neurons.

    The Human Connectome Project

    Further conclusions required data from a large number of people. For this purpose, American scientists and the US Department of Health and Human Services launched the Human Connectome Project (HCP) in 2009.

    Researchers from different universities around the world collected data from thousands of volunteers using powerful MRI scanners. The result was an atlas that was unique in its detail and content. In it, scientists identified different areas of the brain based on their functions and connections with other areas. In each hemisphere, they counted 180 different segments, many of which were previously unknown.

    Each of these areas can be thought of as a small department within a large company. For example, there is a department that analyzes visual information, next to it is one that is responsible for movement, another one is involved in planning, and so on. Different areas solve problems together.


    As the HCP’s main phase concluded in 2021, new studies began to build on it. Some projects focused on studying the brains of depressed adolescents, while others focused on analyzing connectome changes in older adults. The research is ongoing, and its results may help develop new diagnostic and treatment methods for various diseases in the future.

    Google Projects

    In the last decade, Google has joined the study of neural connections. Together with leading universities and scientists, its specialists conduct large-scale research.

    For example, in 2021, Google, in collaboration with Harvard University, completed a unique project called H01. Scientists processed more than 1.4 PB of data to create a highly detailed 3D map of a small fragment of the human brain. The researchers cut the brain tissue into thousands of ultra-thin sections and imaged them with powerful electron microscopes. This made it possible to examine not only cells but even the individual contacts between them – synapses.

    In 2024, Google and the Institute of Science and Technology Austria (ISTA) introduced a new way to study the connections between neurons using a light microscope. To see the smallest details, the scientists increased the size of the tissue using a special gel.

    To do this, the specialists used brain fragments that were anonymously donated by patients during neurosurgical operations (for example, when treating epilepsy, a small section of the healthy cortex is sometimes removed to gain access to its deeper layers). Then the scientists soaked the brain tissue in hydrogel, which caused it to increase in size several times. This made it possible to examine individual connections between neurons using a regular light microscope. After that, using special dyes, they “illuminated” proteins, synapses, and other molecules in different colors.

    This method helps to see how neurons connect to each other, as well as to distinguish between their types and functions. This allows for the rapid creation of detailed color maps of the brain, which immediately show both the connections and the features of the cells.

    MICrONS Project

    In April 2025, a team of researchers from the Allen Institute, Baylor College of Medicine, Princeton (USA) and other institutions presented the largest and most detailed map of the connections of the mammalian brain. It shows not only the structure, but also the work of neurons.

    To build the map, the scientists studied 1 mm³ of the mouse visual cortex and reconstructed more than 200,000 cells, of which about 82,000 were neurons. In this tiny area, they found about 500 million synapses.

    At the level of individual types of neurons and their connections, the brains of mice and humans are similar. Scientists therefore believe that analyzing the animal’s connectome will help explain how neural circuits fail in human diseases such as Alzheimer’s disease or multiple sclerosis. This may allow us to find new approaches to treatment.

    EBRAINS infrastructure

    The famous European brain mapping project, the Human Brain Project (HBP), ran from 2013 to 2023. Its goal was to develop new tools and technologies for studying the human brain.

    As part of the project, the digital research platform EBRAINS (European Brain Research Infrastructures) was created in 2019. It includes databases on the structure and functions of the brain, modeling tools, and digital maps of the organ. EBRAINS allows neural connections to be modeled and helps specialists from different countries exchange scientific data.

    China’s Human Connectome Project

    The Chinese Human Connectome Project (CHCP) is a scientific study launched in 2017. Within its framework, specialists analyze the connectome of East Asians, primarily the Chinese.

    The main goal of the CHCP is to find out whether the brain is affected by environmental and lifestyle factors. Scientists have already presented some findings. A comparison of CHCP data with the Human Connectome Project showed that the brains of people from different cultures – Chinese and Western – are broadly similar. They share common features in structure, function, and connections. But there are also differences, in areas related to language and complex thought processes. In the future, these results will help us understand how culture and environment shape human behavior.