Tag Archives: security

Chocolatey and security

I decided to use Chocolatey to install applications on my freshly installed Windows 10 machine. My original idea was to use OneGet, the new package-manager manager introduced in Windows 10, which has a preview Chocolatey provider. However, I didn’t have much success with it, so I stepped back and used Chocolatey directly.

The first step is to install Chocolatey, which is very simple: just run the one-line script from the chocolatey.org homepage in an admin command prompt:

C:\> @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

With this single command you actually do three things:

  1. You download a PowerShell script.
  2. You run the downloaded script with administrative privileges.
  3. You extend your PATH environment variable.

I don’t know about you, but step 2 freaks me out. And this can be a good time to take a deep breath and think through what you are about to do: you will install applications from unknown sources onto your machine! When you execute a command like

choco install adobereader

you have no idea what it will download and install to your computer.

So what can you do?

First, install only those choco packages that are approved by moderators. Moderation is a manual process, and it may suffer from human error, but it is a validation after all. On the detail page of a package that was approved by a moderator, for example the Adobe Reader package, you can see this in a green box:

This package was approved by moderator gep13 on 6/11/2015.

If a package was not checked by a moderator, for example Notepad2, you can see this in a red box:

This package was submitted prior to moderation and has not been approved. While it is likely safe for you, there is more risk involved.

Once you have opened the detail page of a package, you had better read everything you can find there. For example, on the 7-zip package page you can find this warning:

NOTE: The installer for 7-Zip is known to close the explorer process. This means you may lose current work.

You can also find useful options here; for example, the Firefox package allows you to specify the language of the application to install:

choco install Firefox -packageParameters "l=en-US"

If you scroll down, you may find references in the comments which can make you choose not to install a certain package. For example, OpenCandy is mentioned in the comments of the CDBurnerXP package, and you can probably recall installers that put unwanted software on your machine if you just blindly click through them with next-next-finish.

In the middle of the page you can also find the installer PowerShell script, which is worth a look, because there you can see what EXE or MSI is downloaded and from which server. In the case of the Adobe Reader package this script is only 6 lines long, the URL is clearly visible in the middle, and you can very easily understand what is actually happening. One cannot say the same about the 117-line script of the Firefox package, or the script of the Node.js package, which is only 1 line but refers to two other packages.
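If you would rather check that script offline, note that a .nupkg package file is a plain ZIP archive, so you can download it, extract it and read the install script before letting Chocolatey run it. Here is a minimal Python sketch of the inspection step (the function name is made up for this example; it assumes you have already downloaded the .nupkg to disk):

```python
import zipfile

def read_install_script(nupkg_path):
    """Return the contents of tools/chocolateyInstall.ps1 from a .nupkg,
    which is a plain ZIP archive, or None if the package has no install script."""
    with zipfile.ZipFile(nupkg_path) as package:
        for name in package.namelist():
            if name.lower().endswith("chocolateyinstall.ps1"):
                return package.read(name).decode("utf-8", errors="replace")
    return None
```

Reading this script before running choco install shows you exactly what would be downloaded and from where, without trusting the rendered package page.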

In summary, I don’t feel Chocolatey can be used securely; it all comes down to trust. You can do these manual checks, you can update your operating system, install antivirus and antimalware tools, deny access to unwanted hosts, but at the end of the day you will run code from an unknown source, which – at least from a security perspective – doesn’t seem to be a good idea.

I installed these packages, and they didn’t burn down the house (yet):

adobereader
7zip
emet
fiddler
filezilla
firefox -packageParameters "l=en-US"
gitextensions
google-chrome-x64
join.me
keepass
nodejs
paint.net
silverlight
skype
sysinternals
vlc

So what do you think? Do you use similar tools? Are you brave enough to use Chocolatey, and if so, what other packages do you install?

 


Use BitLocker without TPM

Contrary to popular belief, you can use Windows’ built-in BitLocker to encrypt your hard disk content even if you don’t have a TPM chip in your computer. You can easily encrypt your data disks; you just have to enter a password, and you have the option to save your recovery key to a file, a USB drive or even to the cloud. However, when you try to encrypt your OS volume with BitLocker, you will see the following error message:

This device can’t use a Trusted Platform Module. Your administrator must set the “Allow BitLocker without a compatible TPM” option in the “Require additional authentication at startup” policy for OS volumes.

[Image: bitlocker-1-tpm-error]

It is quite a good error message, because it not only states what the problem is, but also helps you recover from it. If it said exactly where you can find that setting, it would be perfect!

If you search for the word “policy”, you will find the Local Security Policy, but that is not what you really want. What you need is the Group Policy Object Editor, even if your computer is not domain joined.

Start a Microsoft Management Console (mmc), and add the Group Policy Object Editor snap-in (click for the full image):

[Image: bitlocker-2-mmc-add-snapin]

Then, within the Local Computer Policy –> Computer Configuration –> Administrative Templates –> Windows Components –> BitLocker Drive Encryption –> Operating System Drives branch, you can find the setting the error message referred to:

[Image: bitlocker-3-policy-path]

Open the setting, select Enabled, and then check the Allow BitLocker without a compatible TPM checkbox below:

[Image: bitlocker-4-policy-setting]

After you have closed all windows, you have to refresh your security policy, which you can do without restarting your computer by running gpupdate from the command prompt:

[Image: bitlocker-5-gpupdate]

Now you can encrypt your OS volume just as you did with your data disks.

 


HTTPS: Do. Or do not. There is no try.

Last summer I had the chance to visit Bologna, Italy, and I was happy to see that there is free wifi service at the airport. I probably should have been suspicious from the beginning, but things started to look strange when I saw this “welcome” page in the browser after connecting:

[Image: airport-ssl-warning]

According to the message, the site’s security certificate is “a bit” invalid. Actually, it could only be more invalid if it had already been revoked.

If you decide to continue, you will see the airport’s website:

[Image: airport-webpage]

Really original design. Granted, Bologna is not a huge metropolis, but I’m pretty sure it would be easy to find a student at the local university who could click together a prettier website over a weekend.

This page made me curious, and I quickly found out that the website has nothing to do with accessing the public internet.

There are many unusual and suspicious aspects here:

  • Certificate
  • IP address
  • Design
  • Phone number collection

This was the moment when I stood up and started to look for Troy Hunt and his Pineapple 🙂

Most of these concerns could be swept away with a single valid SSL certificate. But an invalid certificate guarantees nothing; it only produces nervous average users and pro users who worry about the security of their data.

If you do HTTPS, please do it correctly. Or don’t do it at all. Don’t try.

 


Mixed content warning

It is so sad when a webpage falls apart in the browser, like this one in Chrome:

[Image: mixed content chrome]

Why is that? Oh, isn’t it obvious? The explanation is right there; let me help you:

[Image: mixed content chrome warning]

It is called the mixed content warning, and although it is a warning, it is very easy to miss. Let’s see the same page in Firefox:

[Image: mixed content FF]

Do you get it? Here it is:

[Image: mixed content FF blocked]

Internet Explorer is not so gentle; it immediately calls for the user’s attention:

[Image: mixed content warning]

Although you don’t have to search for a shield icon here (one of the most overused symbols in IT history), because you immediately receive a textual message, the situation is not really better. Average users don’t understand this message or the real cause behind it. What’s more, not only users don’t get it: web developers don’t understand the security consequences either, otherwise there would be no pages with this warning at all.

It is very easy to get rid of the mixed content warning: just ensure that if you load the page via the https:// protocol, then you load all referenced content (yes, all of it) via https as well. If there is a single http:// URL in your page, the browser will trigger the mixed content warning. If you load content from a third-party domain and cannot use relative URLs, then start your reference URLs with “//”, which tells the browser to use the same protocol that was used to load the page itself. This is called a “protocol relative”, “scheme relative” or “scheme-less relative” URL, and you can find its description as far back as RFC 3986 (dated January 2005), which specifies the URI syntax. Thankfully, all browsers understand it as well.
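This resolution rule can be demonstrated with Python’s standard library, shown here purely as an illustration of RFC 3986 reference resolution (the example URLs are made up):

```python
from urllib.parse import urljoin

# A scheme-relative ("//...") reference inherits the scheme of the base page.
secure_page = "https://example.com/index.html"
plain_page = "http://example.com/index.html"
resource = "//cdn.example.org/script.js"

print(urljoin(secure_page, resource))  # https://cdn.example.org/script.js
print(urljoin(plain_page, resource))   # http://cdn.example.org/script.js
```

The same reference resolves to https on a secure page and to http on a plain one, which is exactly why it avoids the mixed content warning.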

It is time to fix these pages, before browsers sooner or later completely block such poorly implemented pages.

 


How many requests do you need to authenticate?

It sometimes happens that a webservice requires Basic authentication, which is usually not an issue for .NET clients thanks to the Credentials property on the generated proxy class, where you can set the user name and password:

MyService ws = new MyService
{
    Credentials = new NetworkCredential( "user", "password" )
};

You may think that after setting this property the client sends an HTTP request which contains the authentication data, but unfortunately things happen differently. First a request is sent without the Authorization header, and if the server returns a 401 Unauthorized response, a second request is submitted which contains the user name and password in the header. The result is doubled traffic.

If you don’t like this (and why would you), you can use the PreAuthenticate property which forces the client to always send the authentication header without waiting for a 401 response:

MyService ws = new MyService
{
    Credentials = new NetworkCredential( "user", "password" ),
    PreAuthenticate = true
};
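For the curious, the Authorization header that preauthentication sends with every request is just the Base64-encoded user:password pair. A quick Python illustration of the wire format of HTTP Basic authentication (this is not part of the .NET client, just a sketch of what travels over the network):

```python
import base64

def basic_auth_header(user, password):
    """Build the value of the Authorization header for HTTP Basic authentication."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("user", "password"))  # Basic dXNlcjpwYXNzd29yZA==
```

This also makes it obvious why Basic authentication must only ever be used over HTTPS: the header is an encoding, not an encryption.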

 


Remembering remote desktop passwords

It is really annoying that the remote desktop client sometimes remembers only your login name but not your password, because:

Your credentials did not work

Your system administrator does not allow the use of saved credentials to log on to the remote computer COMPUTER because its identity is not fully verified. Please enter new credentials.

[Image: rdp-credentials]

To fix this, start the Local Group Policy Editor (gpedit.msc) and navigate to this branch: Computer Configuration –> Administrative Templates –> System –> Credentials Delegation. Here, open the Allow delegating saved credentials with NTLM-only server authentication option and set it to Enabled.

Click the Show… button in the Add servers to the list option and add the servers you want to apply this setting to. You must use the TERMSRV/computername format to specify a single computer, or you can use TERMSRV/* to refer to all servers.

 


Top Cyber Threat Predictions for 2014

On the Microsoft Security Blog, Tim Rains, the director of the Trustworthy Computing division, published the top 8 cyber threat predictions for 2014:

  1. Cybersecurity Regulatory Efforts Will Spark Greater Need for Harmonization
  2. Service-Impacting Interruptions for Online Services Will Persist
  3. We Will See an Increase in Cybercrime Activity Related to the World Cup
  4. Rise of Regional Cloud Services
  5. Dev-Ops Security Integration Fast Becoming Critical
  6. Cybercrime that Leverages Unsupported Software will Increase
  7. Increase in Social Engineering
  8. Ransomware will Impact More People

The list includes perspectives from senior cybersecurity leaders at Microsoft, so it is definitely worth reading them in detail.

 


Suppressing forms authentication redirects

One of the worst pain points of correctly implementing authentication is defining how to handle unauthorized requests, so that, for example, neither unauthenticated users nor users who are not members of the Admins group can request the /admin URL.

Thankfully, the FormsAuthenticationModule in ASP.NET provides a built-in solution to this problem. When the module is initialized, it subscribes to the EndRequest event with the OnLeave event handler, and when the HTTP status code is 401, this event handler redirects the user to the login page. This is a very convenient feature for classic requests; however, it may cause serious headaches with Ajax.

When the module redirects the request, the client receives an HTTP 302 Redirect response instead of the original 401 Unauthorized error code. As defined in the standard, the XMLHttpRequest client transparently follows the redirect and downloads the content from the URI specified in the Location header, which is usually the Login.aspx page. So when the success handler of the XHR is called, it sees the HTML markup of the login page as the result of the call, and the result code is 200 OK, which indicates success. So how can you handle this easily?

Up to .NET 4.0 you had no option to fix this behavior other than adding a custom HTTP module to the ASP.NET pipeline. But ASP.NET 4.5 introduced the new HttpResponse.SuppressFormsAuthenticationRedirect property, which you can set to true to avoid the redirect and force the FormsAuthenticationModule to send the original 401 error code to the browser. Because this property is attached to the Response, you cannot set it globally; instead you have to flip this switch in every handler that requires this behavior. If you want to set it for every response, you can do that in the Application_EndRequest handler in global.asax.
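Why the suppression matters can be seen from the client’s perspective: without it, a “successful” 200 response may carry the login page instead of the requested data. The decision an Ajax caller has to make can be sketched like this (Python pseudologic; the login-page detection heuristic is a made-up convention for this sketch, not part of any framework):

```python
def interpret_xhr_result(status, body):
    """Decide what an Ajax caller should do with a response, depending on
    whether the forms-authentication redirect was suppressed or not."""
    if status == 401:
        # Redirect suppressed: the real error arrives, so the client can
        # show a login box or a warning message.
        return "show login box"
    if status == 200 and "<form" in body.lower() and "login" in body.lower():
        # Redirect not suppressed: the XHR followed it transparently and
        # "succeeded" with the HTML of the login page instead of the data.
        return "unexpected login page"
    return "process data"

print(interpret_xhr_result(401, ""))  # show login box
```

With the redirect suppressed, only the first branch is ever needed; the fragile second branch exists precisely because the 401 was hidden.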

Now it is the client’s task to handle the specific error code as required, for example by displaying a login box or a warning message in JavaScript. But you already have that logic, don’t you?

 


How unique is your machine key?

Most cryptography-related features of the ASP.NET platform rely on the machine key, therefore it is very important to assign unique machine keys to independent applications. Thankfully, the default configuration looks like it ensures this for both the validation key and the decryption key:

<machineKey validationKey="AutoGenerate,IsolateApps"
            decryptionKey="AutoGenerate,IsolateApps" />

The AutoGenerate option frees you from manually setting the keys, and the IsolateApps option ensures that unique keys are generated for every application.

But not always!

ASP.NET will definitely generate a key, but it is neither the validation key nor the decryption key; instead it is a base key (let’s call it the machine key), which is then transformed into the validation key and the decryption key. The base machine key is stored in the registry at the following location:

HKCU\Software\Microsoft\ASP.NET\4.0.30319.0\AutoGenKeyV4

Note that this key sits in the HKEY_CURRENT_USER hive, so the generated machine key belongs to the profile of the user that runs the application. This means that if you have two applications in IIS which are running with the same process identity, they will use the same machine key! One more reason why you should forget the SYSTEM, LOCAL SERVICE and NETWORK SERVICE accounts and run all your web applications with separate ApplicationPoolIdentity accounts. This also shows that the application pool is the real app isolation boundary.

Having two applications that share the same base key is not necessarily a problem, because the keys used to validate and encrypt are created from this key with additional transformations. The transformation is determined by the second modifier after AutoGenerate. If you set IsolateApps, the runtime uses the value of the HttpRuntime.AppDomainAppVirtualPath property, which is different for two applications sitting in different virtual directories on the same website, so the generated keys will be different (which is good).

On the other hand, if you have two applications in the same path but on different websites, the value of this property will be the same for both apps. For example, for an app that sits in the site root the value is “/”, so IsolateApps does not provide the isolation you expect!

To solve this problem ASP.NET 4.5 introduced a new modifier called IsolateByAppId, which uses the HttpRuntime.AppDomainAppId property instead of the AppDomainAppVirtualPath. The value of this property is something like this:

/LM/W3SVC/3/ROOT 

The “3” in the middle is the ID of the site in my case, and “ROOT” means that the app sits in the site root.
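The effect of the modifier can be modeled with a simple key-derivation sketch in Python. This is only an illustration of the isolation logic (the base key value and the HMAC-SHA256 derivation are assumptions of this sketch; ASP.NET’s actual transformation differs):

```python
import hashlib
import hmac

def derive_app_key(base_key: bytes, modifier: str) -> bytes:
    """Model: derive a per-application key from the shared base machine key.
    Only the isolation idea is shown, not ASP.NET's real algorithm."""
    return hmac.new(base_key, modifier.encode("utf-8"), hashlib.sha256).digest()

base_key = b"auto-generated base key from HKCU"

# IsolateApps-style modifier: two sites, both hosting the app in the site
# root, get the same virtual path ("/"), so their derived keys collide.
print(derive_app_key(base_key, "/") == derive_app_key(base_key, "/"))  # True

# IsolateByAppId-style modifier: the app ID contains the site ID, so two apps
# on different websites get different keys even with the same virtual path.
print(derive_app_key(base_key, "/LM/W3SVC/3/ROOT") !=
      derive_app_key(base_key, "/LM/W3SVC/4/ROOT"))  # True
```

The collision in the first case is exactly the IsolateApps problem described above; the differing app IDs in the second case are what IsolateByAppId exploits.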

To summarize: the default AutoGenerate,IsolateApps setting does not necessarily provide you with unique keys, but if you host your apps in their own application pools running with ApplicationPoolIdentity, and you use IsolateByAppId instead of IsolateApps, you can be sure that your apps will use unique autogenerated keys.

The simplest way to test these settings is to use the localtest.me domain to create two separate websites, and then create a simple webpage that uses Milan Negovan’s code to retrieve and display the autogenerated keys.

 


Removing chatty HTTP headers

If you look into the traffic of your ASP.NET application, you may notice the following headers in the HTTP response:

Server: Microsoft-IIS/8.0
X-Powered-By: ASP.NET
X-AspNet-Version: 4.0.30319
X-AspNetMvc-Version: 5.0

These headers have no effect on your application in any way, they are just there to provide more information to the Bing bot about your website.

Unfortunately, these response headers make attackers’ jobs easier, because if they know what platform and version you use, they can try only those exploits that work in this specific environment. Therefore, for security reasons it is good practice to change the defaults and remove these headers.

 

Server

Broadcasting the Server header is hardwired into IIS; I’m not aware of any configuration switch you could use to remove it. You can use UrlScan, but that tool was last updated in 2008. If you have an ASP.NET application, you can remove this header in global.asax, just before the response leaves the server:

protected void Application_PreSendRequestHeaders()
{
    this.Response.Headers.Remove( "Server" );
}

 

X-Powered-By

The X-Powered-By header is added by IIS to the HTTP response, so you can remove it even at the server level via IIS Manager:

[Image: header-x-powered-by]

 

Or of course you can use web.config directly:

<system.webServer>
   <httpProtocol>
     <customHeaders>
       <remove name="X-Powered-By" />
     </customHeaders>
   </httpProtocol>
</system.webServer>

 

X-AspNet-Version

The ASP.NET runtime provides a configuration option to easily turn off the X-AspNet-Version header in web.config:

<httpRuntime enableVersionHeader="false" />

 

X-AspNetMvc-Version

To remove the X-AspNetMvc-Version header, execute the following code when your application starts:

protected void Application_Start()
{
    MvcHandler.DisableMvcResponseHeader = true;
}

 

If you want to make security easier, you can rely on the free NWebsec project on CodePlex. Besides simplifying security configuration, this project provides additional features for session hardening, and specifically for MVC and Azure projects. These features are also available independently as NuGet packages.

 
