May 27, 2016
Microsoft’s little decision is a big mistake
Anyone who knows me knows that I have been a supporter and adopter of Microsoft technologies for decades. I prefer not to use the word “fan”, because I’m an active participant, not a passive onlooker. I’ve attended almost every Microsoft developer conference since 1993, back when it was called PDC. I’ve built small and large software on Microsoft technologies my entire career. I built a framework on top of Microsoft VBA. I started a company with a former Microsoft employee. I’ve contributed to Open Source Microsoft projects. I’ve built an e-commerce platform on .NET and IIS that has transacted over $1B and has endured the tests of time. My license plate used to be CEESHRP! My ties to Microsoft have been a big part of my career path. I understand how Microsoft works, and I understand what .NET is and what it’s not. I’ve heard from countless Microsoft naysayers, and I’ve defended Microsoft for decades. It’s been a tireless fight – and I’m not ready to give it up.
.NET is often misunderstood. It doesn’t run only with “expensive Windows licensing”, and it is definitely not a big, slow monolith. It is a mature framework that runs on the big 3 operating systems (Windows, OS X, Linux) and is behind the fantastic cross-platform mobile app product, Xamarin. .NET has been cross-platform for a while, and with the efforts of Mono, even runs on iOS and Android. .NET is indeed portable.
And recently, Microsoft did something really bold – they decided to open source .NET. From its mature and reliable lineage, .NET Core was born. Finally, the .NET Framework will be truly native, truly open source. The community can contribute to it and make it even better than it already is. This was a big move, a big announcement, and part of the “new Microsoft” that new CEO Satya Nadella has been building. The Microsoft developer community was buzzing with excitement over this. The rest of the developer community – a bit skeptical. After decades of closed source, these transitions take time. They don’t happen overnight. This was the beginning of something really, really right.
Flash forward from the announcement to today, about 18 months later. .NET Core is turning out to be AWESOME. I ported a few of my projects to RC2 easily, and one of the best parts about the experience was this very simple project.json file at the root directory of every project. This file basically tells .NET Core which dependencies to include and how to output the result of a compile. It’s simple, it’s easy to read, and it’s aligned with the way many, many open source projects work. Open Source developers are used to this – a product, and a config file. Config, run, modify config, run again. Get it right, tweak, tune, rinse, test, push, deploy – REPEAT. This is the way we work.
Microsoft built this cool command line interface for the .NET Core Framework – the CLI. It’s what makes .NET Core as simple as:
$ dotnet new
$ dotnet restore
$ dotnet run
These commands use the project.json file and figure out what to do with code. What to do with all the C# and F# files that make up your project. Simple, elegant, and easy to understand and read. Everyone working with .NET Core has gotten used to it. It’s part of what makes .NET Core the “new Microsoft”. I was planning on blogging and extolling the virtues of .NET Core and how fun it was. But then something disappointing happened…
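For anyone who hasn’t seen it, here’s roughly what a project.json generated by `dotnet new` looked like around RC2 (a sketch from memory – the exact version strings and option names varied from release to release):

```json
{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.0-rc2-3002702"
        }
      },
      "imports": "dnxcore50"
    }
  }
}
```

Dependencies, build output, and target frameworks – all in one small, human-readable file.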
This week, I saw a tweet announcing a new blog post from the .NET Team, and was excited to see what’s new.
Changes to project.json https://t.co/Lytqwa7ID4
— .NET Team (@dotnet) May 23, 2016
I thought, “Uh, what’s this…?”. In reading this post, I had to consider for a second if this was April 1st and they were joking. I wish it was. So now, instead of doing a deep-dive on .NET Core goodness, I ask Microsoft…
Dear Microsoft,
This is it. This is your chance to go head-to-head with Node.js and all the developers using JavaScript and cobbling together libraries. This is your chance to grow .NET into an amazing cross-platform, open ecosystem. This is the opportunity, right now, with .NET Core to build something the community will get behind. This is your chance for a new generation of programmers to be exposed to all the amazing ideas and methodologies you’ve spent decades building. It’s this. It’s this simple, little thing, this ONE file, this root of the beginnings of an idea that often starts with “Hello” and ends with “World”. It’s the thing that keeps it simple – like Go, which doesn’t even require a project file at all.
This decision you announced this week, a return to unreadable XML .csproj files, throws the simplicity out the proverbial window and will be perceived as a step backwards. It will ruin all that you have built with the tooling around .NET Core. It won’t kill it, but it will severely limit it. It will turn people off and they will miss out. Tooling matters – a lot. I imagine that you felt the need to do this because the project.json structure would handcuff you in supporting complicated projects. Maybe that’s true, or maybe it’s not, but either way, I see four options:
Option 1: Keep project.json, work around the complex projects, and make it work. Simple, or complex, depending on the size of the project. One file.
Option 2: Support BOTH project.json and myproject.csproj, and if both exist for the same project, display warnings or errors, and/or have different output assembly root targets for each project type.
Option 3: Stick to this decision to abandon project.json, and watch .NET Core grow slowwwwwly, and watch new frameworks rise up and pass you.
Option 4: Get Forked. The community outside Microsoft decides that this issue is forkworthy, forks the CLI, and preserves what has been built so far for project.json while keeping up with the rest of the runtime changes.
Option 3 is the beginning of the end. You may not see it now, but it will hurt you long-term. It will absolutely fuel the flames of the bad stigma Microsoft has in the open source community.
Option 4 is a question of Why? Why have two factions with different goals? That didn’t work out for io.js and Node.js, and it won’t work here either. It will spark mistrust, resentment, and the product will suffer. I really hope this doesn’t happen.

I hope you decide to do the right thing and re-think this. Your long-term respect, adoption, and company growth are at stake. It may seem like a small decision now, but it’s these small decisions that have large impact. Look back on your own incredible history. Look within and you’ll find the answer right in front of you.
Sincerely,
Brett
This tweet should not exist. I hope they make it right.
As Chief Innovation Officer for my company, Onestop Internet, I’m part of a great team of bright people building really amazing and leading-edge e-commerce software. And a large part of my role is still being very hands-on with our production and development environments, both modifying infrastructure and yes (still, happily) writing code – mostly in C#. When we started the company over 9 years ago, I built our software from the first line of code. Our application stack started then, and is still today built on .NET and SQL Server (and recently, MVC). I’ve always had a Windows machine with 2 displays at my desk – starting with a machine literally in my garage, then in our first warehouse, then in our 2nd and 3rd warehouses as we grew, and for the last 4 years, at our multi-building campus in Rancho Dominguez. We recently moved our Marketing and some of our R&D people to our beautiful new suite on the Santa Monica Promenade. After 9+ years of commuting 20+ miles each way on Los Angeles freeways, I’m now riding my bicycle to work along the beach.
I switched from an IBM Thinkpad to my first Apple PowerBook laptop in 2004 and haven’t looked back since. For email, web, photography, and music, I’ve always upgraded and used the latest and greatest Mac laptops as my preferred “always with me” computer. For development, always the best Windows machine with lots of speed and memory. My desk has been configured with plugs and connectors waiting for my Mac laptop to be “docked” next to my always-on Windows box, and I’d switch between them throughout the day. When working remotely from home or while traveling, I’ve always VPN’d in and connected to my Windows machine via RDP using Microsoft Remote Desktop Connection for Mac, and in the last few years, in a pinch, connected to the Windows machine from iPad, and once or twice, even from iPhone. This has been the way I’ve worked, every single day, for years.
That is, until last month, when I powered off my Windows machine for the last time.
I am now fully operational, doing everything I need to do in my job using my new MacBook Pro Retina and VMware Fusion. I traded out my 30″ & 27″ Dell monitors for a single 27″ Thunderbolt display, with HDMI going out to my wall-mounted LED display, which is great for meetings and collaborating. Instead of remoting into my PC at my office, I’m now running a local version of Windows using VMware Fusion. With our new platform development, we’re using Mercurial and TortoiseHg, enabling completely decentralized development.
After using this setup for a few weeks now, I can’t say enough about how impressed I am with the responsiveness of this setup. The performance I’m getting from this thin powerhouse is amazing. Check out the Windows Experience Index numbers. This is considerably better than what I was getting with my 2-year-old, dual-processor PC with 32GB of RAM and an SSD boot drive. One word: PHENOMENAL.
My MBP Retina is configured with 16GB of RAM and a 512GB SSD. I allocated 4 CPU Cores and 8GB RAM to my VMWare Guest. That’s it. That’s all that’s needed to get this excellent performance. During normal use of file copying, compiling, running IIS locally, SQL queries locally – all the things you do during development, it’s very rare I see the MBP CPU spike and hear the fans kick in.
I usually run in windowed mode, but often if I’m doing some heavy PC work, I’ll toggle full screen mode. And if I need to display something on my wall-mounted LED, I’ll enable the ‘Use All Displays in Full Screen’ mode. The ‘Unity’ mode is also extremely natural, allowing your Windows programs and Mac OS X programs to run alongside each other, seamlessly – although in my anecdotal experience, Unity seems to increase CPU usage. Copy/Paste is very natural, and I haven’t had any issues or weirdness copying & pasting between programs in either OS. I don’t need the Guest OS for video games, so I disabled the ‘Accelerate 3D Graphics’ option, and that seems to lower the CPU utilization a bit. USB devices such as the Plantronics Wireless Headset I use for Skyping / Lyncing with our off-site Engineers work well, and I can connect and disconnect them from the Guest OS easily.
So that’s it. I’m down to one machine for all. It goes with me wherever I go, and I even used it to code a new distributed caching feature last weekend – poolside.
It’s extremely liberating to be able to truly work on a PC on Apple hardware without being tied to the speed of your Internet connection. Remote Desktop Connection served me well over the years, but those days are now officially over. I’m running locally only – and once again, never looking back.
Note: That PC I powered off found a home and has since been re-flashed and re-provisioned for a new Engineer at Onestop; May it serve him well!