Author: michaelis

  • Dangers of AI pentesting in Mauritius

    Don’t fall for the trap

    It is true that with AI agents and their accessibility, we have equipped malicious actors with somewhat more automation power in their arsenal.

    However, like fuzzers, they mainly increase the speed at which vulnerabilities are found.

    AI agents make it easier to scan many endpoints for vulnerabilities very fast. However, human beings still need to think, interpret, and correlate data to establish the vulnerability chains that lead to exploitation.

    At Michaelis Labs, we have tried two tools marketed and sold as great end-to-end pentesting solutions (ADScan and Penligent) against Hack The Box Active Directory Windows boxes.

    AI agents do speed up parts of reconnaissance and vulnerability scanning, but they fail at some pivots, they often cannot exploit domain trusts, and they cannot switch to a different enumeration technique when one fails over a given protocol.

    More importantly, DeepSeek and ChatGPT still misread documentation and provide enumeration commands with incorrect or missing flags.

    In conclusion, relying solely on their output is a huge mistake and money thrown in the trash.

    There are other ways besides an AI penetration test you can use to secure your internal environment with proven effectiveness and less risk to your data.

    Buying a fully autonomous, continuous, AI agent to penetration test your environment is a flashy gadget being marketed everywhere with no real return on investment.

    The narrative that we need AI to secure our networks, and to find and exploit vulnerabilities in them, is being pushed with fear to get companies to buy a fake piece of insurance.

    It’s a dangerous and misleading trend that has now reached the Mauritius market.

    Funding is being raised right now in the US and UK to market AI pentesting solutions aggressively.

    New entrepreneurs with no background in cybersecurity have hit the Mauritian market selling AI pentesting tools.

    Ask yourself what will happen to your network and data when unknown AI agents, owned by unknown persons, have access to them 24/7.

  • Out-of-band web vulnerabilities server setup

    Setting up a private Burp Collaborator Server

    To protect clients, setting up a private Collaborator server is an important step before any out-of-band web security testing.

    Considering the number of hours I spent researching the correct DNS settings, I have written a short guide to make life easier…

    I created a Digital Ocean Droplet.

    By default, all ports are open, but you can easily assign a DigitalOcean Firewall that only allows the ports you need, restricting access to your server.
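    If you prefer (or want to add) host-level filtering on the droplet itself, here is a ufw sketch, assuming you keep the ports used in this guide's collaborator.config; treat the rule set as a firewall config fragment and adjust it if you change ports.

```shell
# Host-level firewall sketch for a Collaborator droplet
# (ports match the collaborator.config used in this guide).
ufw allow 22/tcp      # SSH management access
ufw allow 53          # DNS (TCP and UDP) for Collaborator interactions
ufw allow 25/tcp      # SMTP
ufw allow 587/tcp     # SMTP submission
ufw allow 465/tcp     # SMTPS
ufw allow 80/tcp      # HTTP
ufw allow 443/tcp     # HTTPS
ufw allow 39090/tcp   # Collaborator polling (HTTP)
ufw allow 39443/tcp   # Collaborator polling (HTTPS)
ufw enable
```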

    You can find this in the Networking tab for your droplet.

    SSH into your droplet and run the following commands.

    apt-get update
    apt-get install default-jre
    mkdir -p /usr/local/collaborator/
    

    Download the burp installation file

    curl 'https://portswigger-cdn.net/burp/releases/download?product=pro&version=2024.5.5&type=Jar' \
      -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \
      -H 'accept-language: en-US,en;q=0.9' \
      -H 'priority: u=0, i' \
      -H 'referer: https://portswigger.net/' \
      -H 'sec-ch-ua: "Not/A)Brand";v="8", "Chromium";v="126", "Google Chrome";v="126"' \
      -H 'sec-ch-ua-mobile: ?0' \
      -H 'sec-ch-ua-platform: "Windows"' \
      -H 'sec-fetch-dest: document' \
      -H 'sec-fetch-mode: navigate' \
      -H 'sec-fetch-site: cross-site' \
      -H 'sec-fetch-user: ?1' \
      -H 'upgrade-insecure-requests: 1' \
      -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36' \
      -o /usr/local/collaborator/burp.jar
    

    Create a configuration file (nano /usr/local/collaborator/collaborator.config) according to the below.

    {
      "serverDomain" : "SUBDOMAIN.DOMAIN.com",
      "workerThreads" : 10,
      "eventCapture": {
          "localAddress" : [ "PUBLIC IP OF DROPLET" ],
          "publicAddress" : "PUBLIC IP OF DROPLET",
          "http": {
             "ports" : 80
           },
          "https": {
              "ports" : 443
          },
          "smtp": {
              "ports" : [25, 587]
          },
          "smtps": {
              "ports" : 465
          },
          "ssl": {
              "certificateFiles" : [
                  "/usr/local/collaborator/keys/privkey.pem",
                  "/usr/local/collaborator/keys/cert.pem",
                  "/usr/local/collaborator/keys/fullchain.pem" ]
          }
      },
      "polling" : {
          "localAddress" :  "PUBLIC IP OF DROPLET",
          "publicAddress" :  "PUBLIC IP OF DROPLET",
          "http": {
              "port" : 39090
          },
          "https": {
              "port" : 39443
          },
          "ssl": {
              "certificateFiles" : [
                  "/usr/local/collaborator/keys/privkey.pem",
                  "/usr/local/collaborator/keys/cert.pem",
                  "/usr/local/collaborator/keys/fullchain.pem" ]
    
          }
      },
      "metrics": {
          "path" : "RAMDOM 16 CHAR STRING",
          "addressWhitelist" : ["0.0.0.0/1"]
      },
      "dns": {
          "interfaces" : [{
              "name":"ns1.SUBDOMAIN.DOMAIN.com",
              "localAddress":"PUBLIC IP OF DROPLET",
              "publicAddress":"PUBLIC IP OF DROPLET"
          }],
          "ports" : 53
       },
       "logLevel" : "INFO"
    }
    
    

    The addressWhitelist will restrict access to metrics to your IP if you so desire.
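    The certificate script below chowns the keys to a collaborator user, and the systemd unit later in this guide runs as that user, so the account must exist first. A minimal sketch to create it (the flags assume a Debian/Ubuntu droplet):

```shell
# Create a locked-down system account for the Collaborator service:
# no login shell, home pointed at the install directory.
useradd --system --shell /usr/sbin/nologin \
        --home-dir /usr/local/collaborator collaborator
```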

    Create a configure_certs.sh file:

    nano /usr/local/collaborator/configure_certs.sh
    
    #!/bin/bash
    CERTBOT_DOMAIN=$1
    if [ -z "$CERTBOT_DOMAIN" ]; then
        echo "Missing mandatory argument."
        echo " - Usage: $0 <domain>"
        exit 1
    fi
    CERT_PATH=/etc/letsencrypt/live/$CERTBOT_DOMAIN/
    mkdir -p /usr/local/collaborator/keys/
    
    if [[ -f $CERT_PATH/privkey.pem && -f $CERT_PATH/fullchain.pem && -f $CERT_PATH/cert.pem ]]; then
            cp $CERT_PATH/privkey.pem /usr/local/collaborator/keys/
            cp $CERT_PATH/fullchain.pem /usr/local/collaborator/keys/
            cp $CERT_PATH/cert.pem /usr/local/collaborator/keys/
            chown -R collaborator /usr/local/collaborator/keys
            echo "Certificates installed successfully"
    else
            echo "Unable to find certificates in $CERT_PATH"
    fi
    

    To create the SSL certificates, run:

    ./certbot-auto certonly -d SUBDOMAIN.domain.com -d '*.SUBDOMAIN.domain.com' --server https://acme-v02.api.letsencrypt.org/directory --manual --agree-tos --no-eff-email --manual-public-ip-logging-ok --preferred-challenges dns-01
    

    Enter your e-mail address when prompted during certificate generation.

    You will get a message explaining how to deploy a DNS TXT record. Press Enter to get a second message.

    Go to GoDaddy and add two DNS TXT records named _acme-challenge.SUBDOMAIN, one for each value from the two messages.

    (Screenshot: the two TXT records in the GoDaddy DNS panel.)

    Wait 10–15 minutes.

    Check periodically with dig to see whether your TXT records are visible.
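    A sketch of that check (substitute your real names for SUBDOMAIN.DOMAIN.com; `+short` prints only the record values):

```shell
# Query two public resolvers for the ACME challenge TXT records.
dig +short TXT _acme-challenge.SUBDOMAIN.DOMAIN.com @8.8.8.8
dig +short TXT _acme-challenge.SUBDOMAIN.DOMAIN.com @1.1.1.1
```

    Once both queries return your challenge values, DNS propagation is far enough along to continue.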

    Then press Enter for validation and the certificates are generated.

    Great. You now have certificates. To copy them to your working directory:

    chmod +x /usr/local/collaborator/configure_certs.sh && /usr/local/collaborator/configure_certs.sh SUBDOMAIN.domain.com
    

    Check that your Collaborator server runs correctly:

    bash -c "java -Xms10m -Xmx200m -XX:GCTimeRatio=19 -jar /usr/local/collaborator/burp.jar --collaborator-server --collaborator-config=/usr/local/collaborator/collaborator.config"
    2018-04-08 19:46:36.082 : Using configuration file /usr/local/collaborator/collaborator.config
    2018-04-08 19:46:37.473 : Listening for DNS on 54.38.**.**:3353
    2018-04-08 19:46:37.486 : Listening for HTTP on 54.38.**.**:39090
    2018-04-08 19:46:37.486 : Listening for SMTP on 54.38.**.**:3325
    2018-04-08 19:46:37.487 : Listening for HTTP on 54.38.**.**:3380
    2018-04-08 19:46:37.486 : Listening for SMTP on 54.38.**.**:33587
    2018-04-08 19:46:37.600 : Listening for SMTPS on 54.38.**.**:33465
    2018-04-08 19:46:37.600 : Listening for HTTPS on 54.38.**.**:39443
    2018-04-08 19:46:37.602 : Listening for HTTPS on 54.38.**.**:33443

    The sample output above is from an older run; the ports you see will match whatever is in your collaborator.config. This guide shows the DNS records to put on GoDaddy; I tried Namecheap with no success.
    

    Ctrl + C.

    Create the DNS records, here is what was used on GoDaddy:

    +------+---------------------------+--------------------------+
    | type | name                      | data                     |
    +------+---------------------------+--------------------------+
    | A    | subdomain                 | DROPLET IP               |
    | A    | ns1.subdomain             | DROPLET IP               |
    | NS   | subdomain                 | ns1.subdomain.domain.com |
    | TXT  | _acme-challenge.subdomain | XXXXXXXXXXXXXXXXXXXX     |
    | TXT  | _acme-challenge.subdomain | XXXXXXXXXXXXXXXXXXXX     |
    +------+---------------------------+--------------------------+
    

    Once done, you can re-run the Collaborator but as a service for persistence.

    Create a file called collaborator.service

    sudo nano /etc/systemd/system/collaborator.service
    

    Copy the configuration below:

    [Unit]
    Description=Burp Collaborator Server Daemon
    After=network.target
    
    [Service]
    Type=simple
    User=collaborator
    UMask=007
    ExecStart=/usr/bin/java -Xms10m -Xmx200m -XX:GCTimeRatio=19 -jar /usr/local/collaborator/burp.jar --collaborator-server --collaborator-config=/usr/local/collaborator/collaborator.config
    Restart=on-failure
    
    # Configures the time to wait before service is stopped forcefully.
    TimeoutStopSec=300
    
    [Install]
    WantedBy=multi-user.target
    

    Enable the service:

    systemctl enable collaborator
    

    Finally, start the service:

    systemctl start collaborator
    

    Also, note that if you change the config file, you must stop, disable, and re-enable the service for the new configuration to take effect.
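    In other words, after editing collaborator.config:

```shell
# Cycle the service so the new configuration is picked up.
systemctl stop collaborator
systemctl disable collaborator
systemctl enable collaborator
systemctl start collaborator
systemctl status collaborator   # confirm it came back up
```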

    Once the DNS records are UP, and the service is running:

    systemctl status collaborator
    

    Check with dig that nameserver resolution is working:

    dig subdomain.domain.com NS @8.8.8.8
    dig ns1.subdomain.domain.com A @8.8.8.8 +trace
    

    If you don’t see your droplet IP responding to the last dig, the Collaborator DNS service may be set up incorrectly. Check your collaborator.config, then disable, re-enable, and start the collaborator service.

    Once it responds, go to Burp > Settings > User (or Burp > Settings > Project), open the Collaborator section, select “Use a private collaborator server”, and enter the subdomain.domain.com you have chosen.

    Run the health check and say Hurray!

  • How Companies in Mauritius Can Prevent Data Breaches

    Data breaches are no longer a problem limited to large enterprises overseas, as organizations in Mauritius are increasingly being targeted by opportunistic attackers, automated scanning tools, and financially motivated threat actors. Whether you run a financial service, an SME, or a tech startup, your attack surface becomes visible the moment your systems are exposed to the internet. Preventing breaches requires a combination of technical controls, operational discipline, and continuous validation. Below are key points to consider in order to protect your own data and that of your clients.

    To begin, you must understand your attack surface, which most companies underestimate. Typical exposures include unsecured web applications, forgotten subdomains, open ports, misconfigured services, and leaked credentials found in public repositories. The recommended action is to maintain an up‑to‑date asset inventory and continuously scan for exposed services.
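    As a concrete starting point for that continuous scanning, here is a hedged nmap sketch; example.com is a placeholder for your own perimeter, and you should only scan assets you own or are authorized to test.

```shell
# Enumerate exposed services on an external asset:
# -Pn  skip host discovery (scan even if ping is blocked)
# -p-  all 65535 TCP ports, --open only report open ones
# -oA  save results in all formats under the given basename
nmap -Pn --open -T4 -p- -oA external-scan example.com
# Compare the results against your asset inventory;
# anything unexpected is unmanaged attack surface.
```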

    Next, secure your web applications and APIs, as these are primary entry points for attackers. Common vulnerabilities include broken access control, injection flaws such as SQL or command injection, authentication weaknesses, and insecure APIs that expose sensitive data. You should conduct regular web application and API penetration testing aligned with the OWASP Top 10 risks.

    Implementing strong access control is also essential because weak identity and access management is a leading cause of breaches.

    Key controls include enforcing multi‑factor authentication (MFA), applying the principle of least privilege, regularly reviewing user permissions, and disabling unused accounts.

    You should audit Active Directory, cloud IAM roles, and internal systems.

    Consistently patching and updating systems is another critical step, as unpatched vulnerabilities are widely exploited, often within days of disclosure.

    Maintain a patch management schedule, prioritize critical vulnerabilities with a CVSS score of 7 or higher, and monitor vendor advisories.
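    As a toy illustration of that triage rule, here is a sketch that filters a hypothetical findings list (name,score pairs in a CSV I invented for the example) down to items with CVSS 7.0 or higher:

```shell
# Hypothetical scanner output: finding name and CVSS score.
cat > /tmp/findings.csv <<'EOF'
outdated-openssl,9.8
verbose-banner,2.1
smb-signing-disabled,7.3
EOF

# Keep only critical/high findings (CVSS >= 7.0) for the patch queue.
awk -F, '$2 >= 7.0 {print $1}' /tmp/findings.csv
# prints:
# outdated-openssl
# smb-signing-disabled
```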

    Because prevention alone is not enough, early detection of suspicious activity is vital. Implement centralized logging via a SIEM solution, endpoint detection and response (EDR), and alerts for unusual login patterns or privilege escalation.

    Adopt an assume‑breach mindset to secure internal networks, since attackers often gain initial access and then move laterally.

    Segment your networks to separate user, server, and critical systems, and perform internal penetration testing using assumed breach scenarios.

    Protecting email and training employees is equally important, as phishing remains one of the most effective attack vectors.

    Enforce MFA, deploy email filtering and anti‑phishing tools, conduct employee awareness training, and simulate phishing campaigns.

    Ransomware attacks are rising globally and affect smaller markets as well, so a robust backup and recovery strategy is essential.

    Maintain offline backups, test restoration procedures regularly, and ensure backups are isolated from production systems.

    Regular penetration testing is also necessary because security tools alone cannot replicate real attackers.

    A structured penetration test will identify exploitable weaknesses, validate your defenses, and provide remediation guidance.

    Recommended scope includes external network testing, internal assumed breach testing, and web and API security testing.

    Depending on your industry, you may need to align with data protection regulations, financial security requirements, or international standards. Frameworks like ISO/IEC 27001 offer structured guidance for managing information security risks.

    Cybersecurity is not a one‑time project; it is a continuous process of assessment, remediation, and validation.

    For companies in Mauritius, this situation is actually an opportunity.

    The threat landscape is growing, but overall cybersecurity maturity in the market is still relatively low.

    Organizations that invest early in security will significantly reduce risk and build trust with clients and partners.

    If you need help securing your business, Michaelis Labs assists organizations in Mauritius by identifying and eliminating security weaknesses through internal and external penetration testing, web application and API security assessments, and continuous attack surface monitoring.

  • The State of Web Application Security in 2026

    Web application security in 2026 is not defined by a lack of tools, frameworks, or guidance. It’s defined by a widening gap between what organizations believe is secure and what is actually exploitable in practice.

    Most teams have adopted modern stacks, CI/CD pipelines, automated scanners, and even periodic pentesting. Yet breaches and critical vulnerabilities remain routine. The issue is misplaced confidence and shallow execution.

    1. The Illusion of “Secure by Default”

    Frameworks have improved. Cloud providers have hardened their platforms. Security tooling is more accessible than ever.

    But “secure by default” has quietly become “assumed secure.”

    In reality, modern frameworks reduce common mistakes, not logic flaws.

    Cloud security shifts responsibility but doesn’t eliminate it.

    Automated tools detect patterns but not intent.

    Developers are shipping faster with AI-assisted code generation, but that code often inherits insecure assumptions: missing authorization checks in edge cases, exposing internal APIs, and trusting client-side enforcement.

    The result is a cleaner codebase with fewer obvious bugs but more subtle, high-impact vulnerabilities.

    2. The Real Attack Surface Has Moved

    If your security model is still centered on classic input validation issues, you’re behind.

    Attackers in 2026 focus on application logic and integration layers, not just injection flaws.

    Key areas under active exploitation:

    Authentication & Session Flows

    • OAuth misconfigurations
    • Token leakage across services
    • Weak session invalidation logic

    APIs Everywhere

    • Undocumented endpoints
    • Excessive data exposure
    • Broken object-level authorization (BOLA)
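    BOLA in particular is easy to probe for manually. A hedged sketch (api.example.com, the invoice IDs, and $TOKEN are all hypothetical): authenticate as user A, then request an object belonging to user B with the same token.

```shell
# Fetch an object you own, then one belonging to another user,
# using the same session token.
curl -s -H "Authorization: Bearer $TOKEN" https://api.example.com/invoices/1001   # yours
curl -s -H "Authorization: Bearer $TOKEN" https://api.example.com/invoices/1002   # someone else's
# If the second request returns another user's data, authorization
# is broken at the object level.
```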

    Business Logic Abuse

    • Price manipulation
    • Workflow bypass (e.g., skipping verification steps)
    • Abuse of “intended” features in unintended sequences

    Client-Side Attack Vectors

    • DOM-based injection paths
    • Abuse of browser storage mechanisms

    The modern web app is no longer a monolith; it’s a distributed system. That system is only as secure as its weakest integration.

    3. Where Organizations Still Fail

    Despite better tools, the same structural problems persist:

    Security as a Checkbox
    Pentests are treated as compliance artifacts rather than adversarial simulations. Reports are filed, not operationalized.

    Overreliance on Automation
    Scanners are excellent at finding known classes of bugs. They are ineffective at identifying multi-step attack chains, context-dependent vulnerabilities, and business logic flaws.

    No Threat Modeling
    Features are built without asking: how could this be abused?
    As a result, vulnerabilities are designed in, not introduced later.

    Misplaced Trust in Technology Choices
    Using modern frameworks or cloud platforms does not eliminate risk. It changes its shape.

    Weak Security Culture
    Security is still externalized:

    “The pentesters will catch it”

    “The WAF will block it”

    Neither assumption holds under a motivated attacker.

    4. What Actually Works in 2026

    Security maturity is no longer about tooling but about mindset and execution.

    Think in Attack Paths, Not Vulnerabilities
    A single low-severity issue rarely matters. Chains do.
    Ask: What can this become when combined with other weaknesses?

    Embed Adversarial Thinking Early
    Before shipping a feature:

    • What assumptions does this rely on?
    • What happens if those assumptions fail?
    • Can a user control more than intended?

    Prioritize Authorization Over Validation
    Most critical issues today are not about malformed input; they’re about valid input used in the wrong context.

    Test Like an Attacker, Not a Scanner
    Manual testing remains irreplaceable for:

    • Logic flaws
    • State manipulation
    • Abuse scenarios

    Instrument for Detection, Not Just Prevention
    You will not catch everything pre-production.
    Logging and monitoring should answer:

    • Who accessed what, and why?
    • What patterns deviate from normal behavior?

    5. Our Perspective at Michaelis Labs

    At Michaelis Labs, we operate under a simple assumption:

    If it can’t be realistically exploited, it doesn’t matter. If it can, it matters immediately.

    This translates into a few core principles:

    • Depth over volume in testing
    • Realistic attack scenarios over theoretical findings
    • Focus on impact, not just enumeration

    Security is not about producing longer reports. It’s about uncovering the paths that attackers would actually take and closing them effectively.

    Web application security in 2026 is not failing due to lack of knowledge. It’s failing due to misapplied confidence and incomplete thinking.

    The organizations that improve are not the ones with the most tools.
    They’re the ones that question assumptions, model real threats and test like adversaries.

    Everything else is noise.