Chocolatey and security

August 20, 2015

I decided to use Chocolatey to install applications on my freshly installed Windows 10 machine. My original idea was to use OneGet, the new package-manager manager introduced in Windows 10, which has a preview Chocolatey provider; however, I didn't have much success with it, so I stepped back and used Chocolatey directly.

The first step is to install Chocolatey, which is very simple: just run the one-line script from the homepage in an admin command prompt:

C:\> @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString(''))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

With this single command you actually do three things:

  1. You download a PowerShell script.
  2. You run the downloaded script with administrative privileges.
  3. You extend your PATH environment variable.

I don’t know about you, but step 2 freaks me out. This is a good time to take a deep breath and think through what you are about to do: you will install applications from unknown sources onto your machine! When you execute a command like

choco install adobereader

you have no idea what it will download and install to your computer.

So what can you do?

First, install only choco packages that have been approved by moderators. Moderation is a manual process and subject to human error, but it is a validation after all. On the detail page of a package that was approved by a moderator, for example the Adobe Reader package, you can see this in a green box:

This package was approved by moderator gep13 on 6/11/2015.

If a package was not checked by a moderator, for example Notepad2, you can see this in a red box:

This package was submitted prior to moderation and has not been approved. While it is likely safe for you, there is more risk involved.

If you have already opened the detail page of a package, you had better read everything you can find there. For example, on the 7-Zip package page you can find this warning:

NOTE: The installer for 7-Zip is known to close the explorer process. This means you may lose current work.

You can also find useful options here; for example, the Firefox package allows you to specify the language of the application to install:

choco install Firefox -packageParameters "l=en-US"

If you scroll down, you can find references in the comments which may make you choose not to install a certain package. For example, OpenCandy is mentioned in the comments of the CDBurnerXP package, and you can probably recall installers that install unwanted software if you just blindly click through them with next-next-finish.

In the middle of the page you can find the installer PowerShell script as well, which is worth a look, because there you can see what EXE or MSI is downloaded and from which server. In the case of the Adobe Reader package this script is only 6 lines long; you can clearly see the URL in the middle and easily understand what is actually happening. One cannot say the same about the 117-line script of the Firefox package, or the script of the Node.js package, which is only 1 line but refers to two other packages.
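To give you an idea of what to look for, here is a minimal sketch of what such an installer script typically looks like. This is illustrative only; the URL and silent-install switches below are hypothetical, not taken from the actual package:

```powershell
# Sketch of a typical chocolateyInstall.ps1 (illustrative only)
$packageName   = 'adobereader'
$installerType = 'exe'          # the package wraps an EXE installer
$silentArgs    = '/sAll /rs'    # hypothetical silent-install switches
# Hypothetical download URL -- this is the part you should inspect:
$url = ''
Install-ChocolateyPackage $packageName $installerType $silentArgs $url
```

If the URL points to the vendor's official server, that is a good sign; a random file-sharing host is a red flag.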

In summary, I don’t feel Chocolatey can be used with complete security; it all comes down to trust. You can perform these manual checks, keep your operating system updated, install antivirus and antimalware software, and deny access to unwanted hosts, but at the end of the day you will be running code from an unknown source, which – at least from a security perspective – doesn’t seem to be a good idea.

I installed these packages, and they didn’t burn down the house (yet):

firefox -packageParameters "l=en-US"

So what do you think, do you use similar tools, are you brave enough to use Chocolatey, and if yes, what other packages do you install?


Categories: Security, Windows

Use BitLocker without TPM

Contrary to popular belief, you can use Windows’ built-in BitLocker to encrypt your hard disk content even if you don’t have a TPM chip in your computer. You can easily encrypt your data disks: you just have to enter a password, and you have the option to save your recovery key to a file, a USB drive or even the cloud. However, when you try to encrypt your OS volume with BitLocker, you will see the following error message:

This device can’t use a Trusted Platform Module. Your administrator must set the “Allow BitLocker without a compatible TPM” option in the “Require additional authentication at startup” policy for OS volumes.


It is quite a good error message, because it not only states what the problem is, but also helps you recover from it. If it said exactly where to find that setting, it would be perfect!

If you search for the word “policy”, you will find the Local Security Policy editor, but that is not what you really want. What you need is the Group Policy Object Editor, even if your computer is not domain joined.

Start a Microsoft Management Console (mmc), and add the Group Policy Object Editor snap-in (click for the full image):


Then, within the Local Computer Policy –> Computer Configuration –> Administrative Templates –> Windows Components –> BitLocker Drive Encryption –> Operating System Drives branch, you can find the setting the error message referred to:


Open the setting, select Enabled, and then check the Allow BitLocker without a compatible TPM checkbox below:


After you have closed all windows, you have to refresh your security policy, which you can do without restarting your computer by running gpupdate from the command prompt:


Now you can encrypt your OS volume just as you did with your data disks.


Categories: Windows

Windows 10 install: UEFI, secure boot, USB, GPT, error

I tried to install Windows 10: I downloaded the ISO from MSDN and used the Windows USB/DVD Download Tool to write it to a pendrive. However, my computer refused to recognize the installation media, so I didn’t have the option to boot from it. In my BIOS the boot options were set to UEFI boot ON, secure boot ON, which worked well for the previous Windows 8.1 installation, but now these settings caused the problem. If I changed them to Legacy boot ON, secure boot OFF, the boot from USB option appeared, and the Windows installer started successfully. Unfortunately this is not the happy end of the story, because the installer later stopped with this error:

Windows cannot be installed to this disk. The selected disk is of the GPT partition style.



After some googling I found several methods to convert a GPT disk to MBR (losing all data on the whole disk, or using a 3rd-party boot CD), but fortunately there is a much easier method.


The much easier method

The real issue is that UEFI boot does not work with NTFS pendrives (at least not on my machine), so the solution is to

use a FAT32 pendrive.

Unfortunately the Windows 7 USB/DVD Download Tool always reformats the pendrive to NTFS, even if it was previously formatted as FAT32, so you need another tool to prepare the installer pendrive.

You can use diskpart for example, which is a built-in command line tool in Windows.

Let’s start it:

diskpart
Get a list of the available drives:

list disk

You will see the pendrive in the list (because hopefully you have already inserted it); you can recognize it by its size. Tell the tool that you want to work with that disk:

select disk 2 (use the correct number instead of 2)

Remove all content from the disk (you will lose your existing data on the pendrive!):

clean
Create a new FAT32 partition:

create partition primary
select partition 1
format quick fs=fat32

Now you can quit diskpart:

exit
The last step is to copy the installer to the pendrive. First, mount the ISO, then copy its content to the disk, for example with xcopy (in this example D: is the mounted ISO drive, F: is the target pendrive):

xcopy d:\* f:\ /s /e
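The diskpart steps above can also be collected into a single script file and run non-interactively with `diskpart /s` (a sketch: it assumes disk 2 is your pendrive, so double-check the number with `list disk` first; the extra `assign` command mounts the new partition with a drive letter so you can copy to it):

```
rem usb.txt -- run from an admin prompt with: diskpart /s usb.txt
select disk 2
clean
create partition primary
select partition 1
format quick fs=fat32
assign
exit
```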

With that your BIOS will hopefully recognize the pendrive, so you will be able to boot from it, and you will not have any problem with the GPT partition, even if UEFI and secure boot are turned on.


Categories: Windows

Write your Node.js app in C# with Roslyn

When the Microsoft Managed Languages team announced in December 2013 that they had replaced the existing C# and Visual Basic compilers and were using a new compiler to create the daily builds of the next version of Visual Studio, it became obvious to all developers that something big was coming. The new tool, codenamed “Roslyn”, has far more capabilities than the previous csc.exe and vbc.exe, so it is not a coincidence that .NET Compiler Platform became its official name.

Roslyn is not only about converting our source code to an executable format; we have had excellent tools for that for many years. The goal of Roslyn is to open the power of the compiler to developers and development environments (such as Visual Studio), so it is a compiler-as-a-service solution.

Why do we need that? Compilers are very complicated, and during their execution they collect a huge amount of information about our source code:


It would be a huge mistake to lock that information into a single tool; instead, we should let other tools benefit from this knowledge. The compiler is our only tool that really understands our source code, as it knows for every single character whether it is code, data, a comment etc. Using this knowledge we can build much better tools; for example, many features of Visual Studio 2015 would never exist without Roslyn.

Here you can see the Roslyn architecture (click for a larger image):


If you want to understand all the little boxes, feel free to visit the project home page. I just want to point out that you can find everything here, from understanding the source code to generating the executable output, and because it is a platform, naturally there is an API for everything.

The key item is the Syntax Tree, highlighted in yellow, which is the inner representation of our source code, created by the Parser. The API for that is the Roslyn Syntax API, which allows you not only to analyze your source code, but even to change it on the fly. Let’s take this simple expression as an example:

Regex.Match("my text", @"\pXXX");

Roslyn builds the following syntax tree from that:


(I borrowed the example from the Use Roslyn to Write a Live Code Analyzer for Your API article of the MSDN Magazine, which shows you how to build a Visual Studio plugin on top of Roslyn.)

It is totally up to you whether you only want to analyze this tree or also modify it.

This has also been recognized by the TypeScript team, who have been using Roslyn since version 1.3 to provide the necessary data for several Visual Studio IDE features. As a result, the architecture of the TypeScript compiler became much cleaner and more understandable:


For us the two important components are in the lower box: the Parser and the Emitter. The Parser is responsible for building the syntax tree – in this case from the TypeScript source – and the Emitter is responsible for generating the compiler output based on the syntax tree – in this case the JavaScript (.js), the definition (.d.ts) or the source map (.js.map) file.

It is important to note that the architectures of Roslyn and TypeScript are pretty similar: first they build a syntax tree, and then they generate the output based on the tree. For Roslyn this is C# –> tree –> IL; for TypeScript it is TypeScript –> tree –> JavaScript.

Because the two trees are similar, we can combine them, and now we have the following solution:


This means we can compile C# to JavaScript with Roslyn and TypeScript!

The huge benefit of this architecture is that the input is the original C# source code (in contrast to the JSIL project, for example, which compiles Common Intermediate Language to JavaScript), which contains much more information (e.g. inheritance, scoping, etc.) that is lost in the output of the C# compiler. This gives us the opportunity for more efficient code optimization!

Let’s take the Raytracer demo from the JSIL project as an example. The original C# source is 429 lines; the JSIL-generated JavaScript is 793 lines. You could say that at least it runs; however, the memory management is far from optimal:


Processing the same source code with the Roslyn Parser + TypeScript Emitter combo you can get the following results:


We also measured the CPU utilization and Visual throughput (FPS) with the UI Responsiveness tool in Internet Explorer Developer Tools (F12), and we got better results for both metrics as well.

Obviously better memory management and better performance do not come for free: the generated code is bigger, in this case 1844 lines. The significant increase is the result of the richness of the syntax tree: in contrast to the IL code, it contains information about the classes and member visibility, which can only be translated to complex JavaScript code, but if we do that (and the TypeScript Emitter can), then we get better performance at the price of more code to download. We experienced the same result when we analyzed other applications.

This method works not only with browser based applications, but also with native JavaScript apps, for example with your Node.js app. These are the required steps:

  1. Install the Node.js Tools for Visual Studio plugin, which gives you everything you need to develop Node.js apps with Visual Studio.
  2. Download and install the Node.js with C# plugin from the Visual Studio Gallery (coming soon).
  3. After installation, you will see a “Node.js application” project template within the C# project templates. Create your new project with that template.
  4. Implement your application in C#, and of course you can use Visual Studio for that.
  5. You can debug and run your code as usual, because the project template contains the necessary MSBuild targets that compile your code with Roslyn and TypeScript, and then pass it to the Node.js Tools for Visual Studio, which in turn runs it with Node.

We are looking for beta testers! Before publishing the Visual Studio plugin, we would like to do a broader test and collect feedback from you. If you would like to try our tool, please read these guidelines and leave a comment below, and I will contact you with the download link. Thank you!



This article hit the top of the node.js subreddit:



Categories: .NET, WebDev

Error subclasses may lose their message property

You have probably already seen code like this:

try {
  throw new Error('Oh, nooooo!');
} catch (e) {
  console.log(e.message);            // Oh, nooooo!
  console.log(e instanceof Error);   // true
}

If you create many of those blocks, sooner or later you will decide that you are going to use custom error classes, even in JavaScript. It may seem to be a great idea to implement them this way:

function OhNoooError() {, "Oh, nooooo!"); = "OhNoooError";
}

But you may be surprised by the results:

try {
  throw new OhNoooError();
} catch (e) {
  console.log(e.message);                 // undefined
  console.log(e instanceof OhNoooError);  // true
  console.log(e instanceof Error);        // false
}

So the error you catch is not a classic error (does not inherit from the Error base class), and it does not have a message property! Oh, nooooooooooo!

Here is one way to fix it:

OhNoooError.prototype = Object.create(Error.prototype);
OhNoooError.prototype.constructor = OhNoooError;

function OhNoooError() {
  this.message = "Oh, nooooo!"; = "OhNoooError";
}

And now you have a much nicer output:

try {
  throw new OhNoooError();
} catch (e) {
  console.log(e.message);                 // Oh, nooooo!
  console.log(e instanceof OhNoooError);  // true
  console.log(e instanceof Error);        // true
}

The point is that you have to set the message property explicitly in the derived class – even if you use the extends keyword in CoffeeScript to do the inheritance magic.


Categories: WebDev

Hey WebStorm, don’t search in the node_modules folder!

How annoying is it that, whatever you search for in a Node.js project, WebStorm gives you a thousand hits from the node_modules folder?

Calm down, and go to the Project window, right click on the problematic folder, and click Mark Directory As –> Excluded:


The folder won’t disappear from the Project window, but won’t pollute your search results any more.

You can undo this in the same place: Mark Directory As –> Cancel Exclusion.



Categories: WebDev

I asked for a .vs folder and the Visual Studio team gave it to me

You have probably noticed that Visual Studio creates new files in your solution folder whether you like it or not. One of those files is the Solution User Options file with the .suo extension, which contains settings specific to the given developer machine. You can delete it, but it will quickly grow back.


You have to be careful with these per-developer or per-machine setting files, especially because you should not add them to source control. It is not a coincidence that *.suo is the first item in the .gitignore file recommended for Visual Studio projects.
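A minimal excerpt of what such a .gitignore typically starts with (a sketch, not the full recommended file; the .vs/ entry matters once you are on Visual Studio 2015):

```
# Visual Studio per-user / per-machine files
*.suo
*.user
.vs/
```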

Unfortunately .suo is not the only file like that; you can see many of these polluting your project root when you use different project types. It would be much cleaner if all those files lived in a single folder!

Thankfully this is already solved in Visual Studio 2015: the IDE puts them into a separate directory called .vs, similar to other development environments:


The .vs is a hidden folder, so you have to enable the Show hidden files, folders, and drives option in Windows Explorer if you want to peek into it (but why would you?). Currently (Visual Studio 2015 CTP6) the .suo file and the Visual Basic/C# IntelliSense database files live in this folder and its subfolders, but in future releases more and more files will be moved here, and hopefully this practice will be followed by add-in developers as well. If you are upgrading an existing solution, the old files will not be deleted automatically, so your settings are not lost if you open the project later with an earlier version of Visual Studio.

The best thing about this feature is that this folder was not invented by the Visual Studio developer team. It is there because I asked for it – I and 2822 other Visual Studio users on the Visual Studio UserVoice page. The IDE team looked at it, thought it through, accepted it and implemented it.

It feels so good when developers listen to the end-users.


Categories: Visual Studio
