|Team Foundation Server: Default users||Wednesday 22nd October 2014|
When you create a new project in Team Foundation Server, a set of default permissions is applied. These are reasonably restrictive and, for us, mean that our developers can't access the code by default. The idea is that you then go into the project settings and grant people access, but in our sort of business that's just a painful overhead.
To fix this you have to change the 'process template', which defines how new projects are set up. Note that it therefore only applies to new projects, not existing ones.
To get to the template, you need to use Team Explorer in Visual Studio. On any project go to the Settings page and select "Process Template Manager".
From here you can download the template you use (e.g. Microsoft Visual Studio Scrum 2013.2). This creates a set of files and folders in the location you specify. The file you're interested in is "Groups and Permissions\GroupsAndPermissions.xml".
This is a basic XML file: find the Contributors group and look at its members node. By default ours had @defaultTeam listed, so all we needed to do was duplicate that line and change the name to the Active Directory name of our security group, e.g. "MyDomain\All Developers".
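For illustration, the edited members node ends up looking roughly like this. This is a simplified sketch, not the exact template schema, and "MyDomain\All Developers" is just our example group name:

```xml
<group name="Contributors">
  <permissions>
    <!-- existing permission entries left unchanged -->
  </permissions>
  <members>
    <member name="@defaultTeam" />
    <!-- Added: grant our AD security group the same access -->
    <member name="MyDomain\All Developers" />
  </members>
</group>
```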
Once that's saved, you just need to upload it through the Process Template Manager and it'll overwrite your existing template.
|Btrfs - totting up disk usage with subvolumes and snapshots||Friday 3rd October 2014|
I was struggling last night for disk space.
"df -h" showed my disk nearly full, which seemed insane given its size. So I turned to "du" to narrow down where the space was going; my favourite variation is "du --max-depth=1 -h", which gives a nice total for each directory in the current one.
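As an aside, piping that through sort makes the biggest directories jump out, since "sort -h" understands the human-readable suffixes that "du -h" emits:

```shell
# Size of each immediate subdirectory of the current directory, smallest first.
du --max-depth=1 -h . | sort -h
```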
The numbers didn't add up. This was because I'm using the Btrfs file system, which supports snapshots. When you snapshot a subvolume it appears to duplicate it, but really it just marks a point on the disk. If you don't change anything in the duplicate, it takes up no extra space; if you do start changing things, only the changes take up space (even just the changed parts of files).
The "du" command takes everything at face value, which meant that, with all the snapshots I had, it reported several terabytes of data in use on a 1TB disk.
My snapshots are taken automatically by the rather excellent "snapper" tool, which is bundled with openSUSE. It snapshots hourly and keeps a configured number of snapshots. Luckily for me it does this in a predictable way, putting everything in a ".snapshots" directory. A small change to our command shows where the data usage really is:
du --max-depth=1 --exclude=.snapshots -h .
Armed with this command I soon found that the government open-data database I'd downloaded was still swamping my drive. Deleted!
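If you want to convince yourself of how "--exclude" behaves, it's easy to sanity-check with a throwaway directory; nothing here is Btrfs-specific, it's just du skipping any path component matching the pattern:

```shell
# Build a toy tree with a .snapshots directory in it.
mkdir -p /tmp/du-demo/data /tmp/du-demo/.snapshots
dd if=/dev/zero of=/tmp/du-demo/data/big bs=1024 count=100 2>/dev/null
dd if=/dev/zero of=/tmp/du-demo/.snapshots/copy bs=1024 count=100 2>/dev/null

du --max-depth=1 -h /tmp/du-demo                       # counts .snapshots too
du --max-depth=1 --exclude=.snapshots -h /tmp/du-demo  # .snapshots is skipped

rm -rf /tmp/du-demo
```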