First off, I don’t know if you’ve been avoiding the political storm as much as I have, but there’s one thing that’s been so retweeted, shared, and reshared that I couldn’t avoid it: the discussion about the privacy of our internet traffic.
ISPs are now able to sell your data. While similar data may already be collected and used by social media, applications, and other providers, the news has sparked an interesting conversation about how to secure ourselves while browsing the internet.
Using TLS to encrypt the communication between a client and a server is a good way to secure the content between you and a website. However, what about your destination, your IP address, and the other information that’s required to connect to that server? Virtual private networks (VPNs) have been used by corporations and security-focused individuals for years. Lately, VPNs are the center of attention because they offer a way to encrypt information about your host and prevent your real location from being collected. To learn more about what a VPN is, check out https://www.bestvpn.com/blog/38176/vpns-beginners-need-know/.
I’ve been toying with the idea of using a VPN for a while now. Going to security conventions and using the hotel’s public wifi has never let me sleep well at night; a VPN would minimize that issue. I’ve considered a few paid services but ultimately decided to go the “create your own for cheap” route. The infosec community has been buzzing about Algo. Algo VPN “is a set of Ansible scripts that simplify the setup of a personal IPSEC VPN. It uses the most secure defaults available, works with common cloud providers, and does not require client software on most devices. See our release announcement for more information.”
This blog is hosted on a DigitalOcean droplet. I’m familiar with how droplets work and when I heard that Algo can create a droplet and use it as a VPN provider, I jumped at the opportunity.
How I set up my Algo VPN
Following the README.md of the Algo GitHub repo is very straightforward. The idea is to clone the repo to your local computer. After installing the dependencies and setting up the config file with the number of users to expect, Algo takes care of all the heavy lifting by using DigitalOcean’s APIs to create the droplet and set up the VPN.
I cloned the repo onto my Mac, installed the Python dependencies, and only had one hiccup: on a Mac, you need to have Xcode installed and agree to its license. All of the files required to set up the VPN clients are saved to the config folder after running the script. To connect my Mac, all I had to do was double-click the <username>.mobileconfig file and everything was fully set up.
I’ll have to update this post as I set up my other devices. Windows computers and Android phones are on my to-do list.
To test if the VPN is working, visit whoer.net and check whether the host connecting to the site is the droplet’s IP or your computer’s IP. The caveat of using such a VPN is that it’s not fully anonymous. Website hosts can tell your connection is coming from a DigitalOcean droplet because IP range ownership is publicly available. Similar to the risk of someone watching a Tor node, well-known VPN providers can also be monitored, and it is only a matter of time before the usage of that droplet is mapped out.
That’s not all, folks
VPNs are only one part of a secure digital life. Using HTTPS when connecting to websites, resetting passwords every few months, and enabling two-factor authentication are also important. As far as “providers selling our data” goes, the best way to prevent that is to choose providers with a stronger commitment to their users than to improving their profits.
Over the last two weeks, I’ve been posting a lot about HTTP Public Key Pinning. This will be my last post about it, and I want to focus on testing HPKP. If you don’t know what HPKP is, read the first post. To learn how to add those headers, read the second post.
I’ve had to spend a lot of time trying to figure out how to properly test these headers. In theory, this is how it should work: the first time your browser loads a website with HPKP, it saves the pin in its memory and then compares the pin on every request after that until the pin expires. If the browser ever finds a request without a pin or with an incorrect pin, it will block the request and warn the user.
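That trust-on-first-use flow can be sketched in a few lines of Python. To be clear, the class and method names below are my own invention to illustrate the logic, not any real browser internals:

```python
import base64
import hashlib

class HpkpStore:
    """Toy model of a browser's HPKP cache (hypothetical, for illustration only)."""

    def __init__(self):
        self.pins = {}  # host -> set of base64 SHA-256 pins

    @staticmethod
    def pin_for(spki_der: bytes) -> str:
        # A pin is the Base64 of the SHA-256 hash of the key's DER encoding
        return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

    def first_visit(self, host: str, header_pins: set) -> None:
        # On the first visit, the browser stores whatever pins the header advertises
        self.pins[host] = set(header_pins)

    def check(self, host: str, spki_der: bytes) -> bool:
        # On later visits, the presented key must match a stored pin
        stored = self.pins.get(host)
        if stored is None:
            return True  # never pinned: nothing to compare against
        return self.pin_for(spki_der) in stored

store = HpkpStore()
good_key = b"pretend these are the real certificate's key bytes"
store.first_visit("example.com", {HpkpStore.pin_for(good_key)})
print(store.check("example.com", good_key))         # the matching key passes
print(store.check("example.com", b"attacker key"))  # a mismatched key is blocked
```

The “block and warn” part is exactly what the `check` result would drive, and it only kicks in once a pin has been stored, which is what makes testing this behavior so awkward.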
Failed attempts at testing HPKP
In order to test this, you should just have to change the pin. Easy, right? Well, so far the execution of that has been tricky. When I tried to change the pin to a random value that doesn’t match the cert, the browser seemed to ignore the pin (according to my own results) and didn’t send an error.
The next idea I attempted was to use OWASP ZAP to man-in-the-middle (MITM) the traffic, which would change the certificate so the pin could be modified in all the responses. This also didn’t work, because HPKP is ignored when a user-trusted certificate is used instead of one approved by a CA. “Firefox (and Chrome) disable Pin Validation for Pinned Hosts whose validated certificate chain terminates at a user-defined trust anchor (rather than a built-in trust anchor).” –Mozilla.
Better ways for testing HPKP
What did seem to work was setting Chrome’s expected value to a false pin, and then getting an error when going to the site with the real pin. Here are the steps:
Go to chrome://net-internals/#hsts in chrome
Add your domain and use the example sha256/7HIpactkIAq2Y49orFOOQKurWxmmSFZhBCoQYcRhJ3Y= as the Public key fingerprints
Go to your domain
You should see an error message like the one below.
This works for testing it from one application. What about other browsers like Firefox?
Another idea for testing HPKP is to keep the pin but change the certificate to one that doesn’t match it. This requires buying another certificate, because it has to be CA-approved. You can either update your domain’s configuration to serve the new cert, or you can try the MITM approach with a CA-signed cert instead of a self-signed one.
Let me know how you test HPKP; it’s tricky business and isn’t easy to do. There are online tools like report-uri’s HPKP analyser that can compare the certificate and the pin, but they will not test the browser’s usage of the pin. A lot of people think that’s good enough, but it’s not: if browsers do not properly block requests with a missing or incorrect pin, then there is no point in using HPKP.
Before we try to add an HPKP header, let’s review last week’s material. I made a post about what HTTP Public Key Pinning is: a fingerprint that browsers use to compare certificates and warn the user if the certificate comes from a different source, even if the new certificate is otherwise trusted. If that doesn’t make sense, check out the link to the previous post.
A Public-Key-Pins header looks like this:
Public-Key-Pins: pin-sha256="…"; pin-sha256="…"; max-age=###; includeSubDomains; report-uri="…"
The required directives are pin-sha256 and max-age. The pin is where you add the actual pin, and max-age is the number of seconds the pin is kept by a browser. It’s good to note that the policy will not work with only one pin in the header; at least one backup pin is required. I’ll go over creating pins further below. For testing, keep the age short, like 10 seconds (I did 500); otherwise the standard recommends a two-month period (5184000 seconds). Both includeSubDomains and report-uri are optional. includeSubDomains is just a flag; I set it just to be safe. I don’t have reporting set up, so I do not include a report-uri in my header.
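To make those rules concrete, here’s a little sanity-checker I’d use on a header value before deploying it. This is my own helper, not any standard API, and it only checks the structural rules above:

```python
import re

def validate_hpkp_header(value: str) -> list:
    """Return a list of problems with a Public-Key-Pins header value (empty = looks OK)."""
    problems = []
    # Every pin is a pin-sha256="..." directive; the policy needs at least two
    pins = re.findall(r'pin-sha256="([^"]+)"', value)
    if len(pins) < 2:
        problems.append("need at least two pin-sha256 values (one must be a backup)")
    # max-age (in seconds) is the other required directive
    max_age = re.search(r'max-age=(\d+)', value)
    if max_age is None:
        problems.append("max-age directive is required")
    elif int(max_age.group(1)) > 5184000:
        problems.append("max-age longer than the recommended two months")
    return problems

good = 'pin-sha256="AAAA="; pin-sha256="BBBB="; max-age=5184000; includeSubDomains'
print(validate_hpkp_header(good))                              # no problems
print(validate_hpkp_header('pin-sha256="AAAA="; max-age=500'))  # flags the missing backup pin
```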
Getting a pin
In order to get a pin, you first need a certificate, so make sure you have TLS enabled on your site. Basically, you need to make sure https://your-website works and you get the green lock. An easy way to get TLS set up is Let’s Encrypt; I wrote about that in my TLS blog post a few months ago (scroll to the Try it! section). Once you have a certificate, you have two choices. First, you can use a nice tool and grab the pin generated for you (don’t worry, you’ll be sending this hash to everyone anyway; it’s OK if someone else generates it for you, as long as it’s valid). All you have to do is type your website’s domain into the tool.
The other option is to use openssl on your server to encode and hash the value yourself. First you need to find the key. Since I use Let’s Encrypt, mine was in /etc/le and called private.pem. All you need to do from there is run the command below:
Be sure to change private.pem to your filename; everything else should stay the same. This won’t modify anything, but it will echo your new pin.
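The pipeline in question is likely along the lines of `openssl pkey -in private.pem -pubout -outform der | openssl dgst -sha256 -binary | base64` (hedged: reconstruct it from the backup-pin commands below if your key format differs). Either way, the last two stages are just “SHA-256, then Base64”, which is easy to show in Python with placeholder bytes standing in for a real key:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # Equivalent of: openssl dgst -sha256 -binary | base64
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder bytes, NOT a real key -- openssl does the DER extraction for you
fake_spki = b"not a real key, just an illustration"
print('pin-sha256="%s"' % spki_pin(fake_spki))
# A SHA-256 digest is 32 bytes, so every pin is 44 Base64 characters
```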
Getting a backup pin
Backup pins are a little trickier to create. First off, the reason we need a backup pin is that pinning limits what the browser will accept; it’s a good idea to give the browser more than a single option, just in case the private key behind the first pin gets compromised. This method generates a new private key and CSR for the same domain. Here are the steps. It’s very similar to generating the first pin; you just need to create the CSR and key first.
Step 1: Generate a new private key
openssl genrsa -out name.your.backup.key 4096
– 4096 is the key size in bits; 2048 and 4096 are the only two values you should be using.
Step 2: Generate a CSR
openssl req -new -key name.your.backup.key -sha256 -out name.your.backup.csr
– There are fields you will be prompted to fill in, do so to the best of your ability.
Step 3: Generate the second pin
openssl req -in name.your.backup.csr -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64
So now you should have both pins.* For future reference, I would recommend writing a script that runs these commands and then echoes the pin values in the format of the header, so it’s possible to copy and paste the header into a configuration file. Can you think of a better way? *This is also a good time to mention that you will have to redo this EVERY time you change your TLS certificate. If you’re being super-cyber-secure, that means every three months, so the more automated, the better!
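For what it’s worth, here’s the formatting half of that script idea as a Python sketch. Feed it the two pins from the openssl commands and it echoes the full Apache directive, escaped quotes and all (the pin values shown are placeholders):

```python
def apache_hpkp_line(primary_pin: str, backup_pin: str, max_age: int = 5184000) -> str:
    # Build the full Apache "Header always set" line, with the backslash-escaped
    # quotes Apache expects, so it can be pasted straight into a vhost file.
    value = (f'pin-sha256=\\"{primary_pin}\\"; '
             f'pin-sha256=\\"{backup_pin}\\"; '
             f'max-age={max_age}; includeSubDomains')
    return f'Header always set Public-Key-Pins "{value}"'

print(apache_hpkp_line("base64+primary==", "base64+backup=="))
```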
Adding a HPKP header
Now, depending on your server, there are a lot of different ways to add a header to your responses. If you’re using Apache, add a line to the vhost file. If you’re using nginx, there’s a similar line you can add to the configuration. If you’re using a fancy load balancer that’s too expensive for a guy running a blog off a VM, then you can use a rule designed for that load balancer.
I’d be wary of telling you too much about my architecture, but the response header already straight up tells you I run on an Apache server… [note to self: remove that next time I’m digging in header configs, then update this blog post]. Since we are all caught up on my server’s inner workings, I’ll go through the steps to add this to your Apache config.
Step 1: Enable the headers module
sudo a2enmod headers
– Apache needs the headers module enabled before the vhost file can set headers
Step 2: Edit your vhost file
sudo nano server.conf (or your favorite text editor: vim, vi, etc.)
– Add the line from step 3
Step 3: Add the header and save the vhost file
Header always set Public-Key-Pins "pin-sha256=\"base64+primary==\"; pin-sha256=\"base64+backup==\"; max-age=5184000; includeSubDomains"
Step 4: Restart Apache
sudo service apache2 restart
Check that your HPKP header is there
Your header should be there, and if everything works well, your browser won’t yell at you about your website. Nothing will change; yay, you’re done! But wait, how do we know? Well, seeing that the pin is there is easy: check your response headers and you can see the pin. Or use another report-uri tool that tells you if the hash is valid and that you have your backup pin available.
In a future blog post, I’ll be discussing a way to actually test the pins and see if the browser responds well. Right now, I’ve only read about a method that requires purchasing a second certificate. To avoid that, I’m looking into other possibilities, such as using a MITM technique. The issue with that is that modern browsers do a similar check even without the pins, so at this point I’m unsure if it’s default browser functionality or the new HPKP header.
If you have any issues with the steps, please comment below. Let me know if you want to see more about securing HTTPS protocols!
On a project I’m involved with, a scanner picked up a low-severity issue where an HTTPS site is missing HTTP Public Key Pinning (HPKP). If you’re like me, you’re probably thinking: what the heck is HPKP? Well, I did a little bit of research and got it working on my personal website. I’ll share my struggles below so you don’t have to follow in my footsteps.
Our browsers store a list of accepted TLS/SSL certificate providers. If a website uses a certificate that has one of those certificate authorities (CAs) as its root, then the browser will trust the certificate and not flag any errors. If only we had a way to check which certificate was being used by a website, so that if it ever changed, browsers would notify the end user.
Public Key Pins are a “fingerprint” that matches a server’s TLS certificate. More specifically, it’s a Base64-encoded SHA-256 hash of the certificate’s public key. The browser uses this pin to validate the certificate, since a different key wouldn’t produce the same pin. Browsers store the pin on the first visit to a website, then compare that stored pin against all future responses to make sure the TLS certificate is the same. If they differ, it means that either the certificate changed or (more likely) there is a man-in-the-middle attack routing all the traffic and modifying the certificate chain.
Pretty cool, right? Adding one header and your clients’ browsers will start complaining anytime someone changes the TLS certificate. That’s a pretty nice security feature for a small amount of work. Well, it would be, if it wasn’t redundant. From what I’ve seen while trying to test HPKP headers, modern browsers like the latest versions of Chrome and Firefox do a similar check with the TLS certificate alone; I think they compare the certs to see if they’re different or if the certificate’s CA is trusted by the browser. Why do we need a hash if the entire TLS certificate will be compared by the browser?
Why bother with HPKP
Honestly, I’d say to add HPKP for a few reasons. I mentioned earlier that the latest versions of two browsers do this check; what if a user’s browser is older? It’s also better to be redundant and have two security controls than to have one and say “eh, it’ll work, well, it should”. At the very least, add the pin to reduce the issues found by scanners so you can be one step closer to having ZERO ISSUES! Which we all know is application security’s wet dream. If you don’t want to do it for the “compliance” aspect, you can always do it so Qualys’s SSL Labs will cheer for you.
When I was doing my research on pinning, I went to a few different sources, all great and very helpful. If after this you still don’t understand HTTP Public Key Pinning, feel free to leave a comment below with questions (I’ll add more, I promise!) and check out the references below.
In my normal fashion, I’m going to start this blog post with a little intro to cover my butt. Recently at work, I’ve been tasked with learning about Transport Layer Security or TLS. This blog post is my own thoughts and is not 100% accurate, but I hope you get the idea as well as I do.
What is TLS?
Well, as I said above, TLS is Transport Layer Security. It’s the encryption used by clients and servers to encrypt messages sent between the two. Some of you may remember SSL, or Secure Sockets Layer; that was the predecessor of TLS. Since then, SSL has been proven to be insecure. Don’t ask me exactly how, but I do know that one example of abusing SSL is the POODLE attack. Wikipedia says “all block ciphers in SSL; and RC4, the only non-block cipher supported by SSL 3.0, is also feasibly broken”. That’s about as far back as I went with the history of TLS. All I took away from SSL is that it’s deprecated and no one should use it.
TLS encryption is a complex combination of keys, certificates, and ciphers… I’ll try to explain this, but don’t punch your screen or write me an angry email if I’m wrong. Before any website traffic is sent between a client and a server, they must agree on a key and a cipher. The cipher is the algorithm that encrypts and decrypts the messages; the key is the shared secret the cipher uses to do it. The client and server exchange keys using a fancy algorithm and decide on a cipher that both know how to use. Certificates are like IDs that prove a TLS connection is valid.
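If you’re curious what “a cipher that both know how to use” looks like in practice, Python’s ssl module will list the cipher suites a client context is willing to offer during that negotiation (this only inspects your local configuration; no connection is made):

```python
import ssl

# A default client-side TLS context, like the one an HTTP library would build
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Each entry is one cipher suite that could be offered during the handshake
for suite in ctx.get_ciphers()[:5]:
    # Suite name, protocol version, and symmetric key strength in bits
    print(suite["name"], suite["protocol"], suite["strength_bits"])
```

The server picks one suite from the client’s list, and that choice determines the key exchange, the symmetric cipher, and the hash used for the rest of the session.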
Why does TLS matter?
Have you ever looked at a packet as it goes across your network? If not, I suggest looking at mitmproxy. A quick rundown for the non-network people: a request over regular unencrypted HTTP is visible to everyone on the network, and a hacker can grab all of the information in your request, like your password, user tokens, credit card, email address, etc. An example of an HTTP request from shakedos is below.
This is mitmproxy catching a pair of authorization tokens for the Tinder “dating” app. Now that the hacker has your auth token, they can inject it into their own requests and gain access to your account. This is one example of why it’s dangerous to send sensitive or private information over unencrypted channels. So to protect your users’ information, encrypt your traffic! If this was all sent over HTTPS using TLS, then the information would not be decipherable without the client’s session keys.
Why everyone should use encryption
This isn’t exactly correct… but I agree with the principle. Somewhere, probably Twitter, I read a couple of articles linked by cryptographers about the point of early cryptography. Spies would email home over encrypted channels. While the message is unreadable, it is still sent out over the network; the only difference is that instead of seeing “token: abc123” it’s “W$JT#N:SNV120934”. Network monitors that expect to see unencrypted requests can flag a request with unknown contents and trace the source. In other words, if our friends the North Koreans caught an encrypted e-mail and traced it to a coffee shop, chances are they could find our spy. Did that make sense? This is the reason why everyone should encrypt things: so ALL internet traffic is a jumbled encrypted mess and monitors can’t single out a specific source of an encrypted message.
Things to shoot for
There’s a lot of configuration options for TLS; even the keys are complex. The main thing to remember when setting up TLS keys is that it’s important to have a minimum of 128 bits of security, which means RSA keys with 2048 bits or ECC keys with 224-255 bits. You can also use larger keys, but going larger is mostly pointless if you use elliptic curve cryptography (ECC), since the smaller ECC keys already provide equivalent security. To save time, OpenSSL caches session state on the server for clients that have already completed a handshake. That cached state is encrypted using AES128-CBC, which is only as strong as 128 bits of security. Because of this, it’s also a good idea to restart your server nightly in order to reset the cache of stored values.
Certificates are also important: the signature should use SHA-256 or better. Use a name that fully represents the domain; don’t use “www” or “localhost”. The certificate’s key type determines which cipher suites can be used, so ECDSA certificates will be used with ECC ciphers. Server certificates should be X.509 version 3; older versions are deprecated. Use certificates issued by a CA, not self-signed certificates: CAs publish information that traces a line of authority and can be used to authenticate the certificates. If your site handles personally identifiable information (PII), then it’s a good idea to have an EV certificate. It’s a special TLS certificate with your company’s name on it to “increase user confidence that the site is who it claims to be”.
When defining your cipher list, it’s a good idea to only use the latest recommendations from NIST. Thanks to Mozilla, a good example of a modern cipher list is …
… This cipher list gives priority to the ciphers defined first, so the more secure ciphers are preferred over weaker ones. The last line prevents broken or deprecated ciphers from being used. Again, SHA-256 or higher is recommended and SHA-1 shouldn’t be used. It’s important to note that AES-128 is a different matter; I believe it’s OK to use ciphers with that in their name. Favor GCM ciphers over CBC ciphers, because GCM provides authenticated encryption with associated data (AEAD).
Try it out!
You’re probably thinking “putting encryption on my server is hard, and if I’m going off your blog, I have no idea HOW to do it, just why I should”. However, setting up TLS on your server can be really easy thanks to a newish tool called LetsEncrypt. Depending on your server, it’s either really easy or kinda easy to set up. I have an Ubuntu VM, and all I had to do was update Python to 2.7.9, run apt-get update, clone their GitHub repo, and finally run ./letsencrypt-auto certonly --apache -d blog.greenjam94.me. The software runs through its own installation of dependencies, then opens a GUI where you submit an email address and choose whether you want both http and https or just https; I suggest just https and being more secure. After that, it’s all set up for you. LetsEncrypt sets up all the configuration you need, even for applications like WordPress blogs. However, I did have to change some of the links on each webpage: in a rush, I had made my connections to some CDNs (hosted code that I borrow) over http, and that gave the browser mixed-content warnings.
The downside to using LetsEncrypt is that it’s a free program that obfuscates your online intentions, which also means it’s a free tool for hackers to hide their treasure chests of evil goodies. Google has started to notice this and is flagging some of the LetsEncrypt certificates, so Chrome might tell your website visitors that you use a suspicious TLS certificate. I haven’t seen proof of that, but infosec friends warn me of this. The certificates are also short-lived (90 days), so they expire pretty fast. All you need to do to fix this is re-run the command from above, but it’s still maintenance that needs to be accounted for. I’m sure there’s more than one sysadmin out there who has written a bash script or cron job to do just that.
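In case it helps, something like this crontab entry would do it. Hedged: the path depends on where you cloned the repo, and the `renew` subcommand may not exist in older client versions, in which case re-run the certonly command instead:

```
# Try a renewal on the 1st and 15th at 3am, then reload Apache to pick up the new cert.
# /opt/letsencrypt is wherever you cloned the repo -- adjust to your setup.
0 3 1,15 * * /opt/letsencrypt/letsencrypt-auto renew --quiet && service apache2 reload
```

Twice a month is deliberate: if one renewal attempt fails, there’s another try well before the 90 days run out.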
There are a few ways to test servers for TLS. You can use free labs provided online, like Qualys SSL Labs. Another option is to use tools like OWASP’s O-Saft. If you are a sysadmin or just love to play in the console, bughunter recently blogged about a bash script that uses OpenSSL to check websites for TLS. I’ve used all three, and it really comes down to what you’re looking for. All can confirm you have TLS installed correctly; Qualys will give you a report card, O-Saft will give you some detailed information, and bughunter’s bash script will give you a list of all the available ciphers on the server.
What TLS doesn’t solve
TLS isn’t a cure for cancer, but it is important. While TLS provides connection encryption, it doesn’t equal trust, validation, or security. TLS makes it harder for hackers on the network to see what packets you’re sending; however, it’s still possible for them to attempt a man-in-the-middle (MITM) attack. There are additional ways to prevent that, like disabling TLS renegotiation and enforcing HSTS. TLS isn’t a replacement for a good implementation of user authorization or input validation. For example, TLS can’t stop a user from using SQL injection to drop your database tables if they can bypass authentication and impersonate one of your users; their injections will just be encrypted as they come flying towards your database. Just because a site has TLS doesn’t mean it should automatically be trusted. Don’t give a site your SSN or credit card just because you see a lock in the URL bar.
Here’s a quick wrap-up: SSL is old, don’t use it. TLS encrypts regular website traffic and makes it safer. Everyone should use encryption so the few who need it don’t get singled out. 128 bits of security is ideal. Installing TLS is easy with LetsEncrypt; it’s not a perfect fit and there are downsides, but it’s a quick start. Test your TLS to make sure it works. I hope you liked this blog post. I really enjoy this topic and will probably be doing a presentation on it at Misec in the coming months. I’ll be learning more and revising this as I do, so please reach out to me if you have any advice.