The Original Coder Blog Posts

Internalizing knowledge is very useful

The human mind is incredible. It can accomplish great things through conscious thought, but what it can learn to internalize and do automatically is remarkable. As a result, learning to internalize software development principles and concepts can allow developers to accomplish much more.

General Examples

Starting from a young age in school we are taught the alphabet, then words, sentences, paragraphs, essays and beyond. There are many details and rules that we are taught over years. Over time these get internalized by the brain and become automatic processes. Years after leaving school, when we write something we don’t think about nouns, verbs, adjectives, grammar trees, etc. All of that knowledge has become mostly automatic, so we just sit down and write. Not having to consciously focus on those details frees up our minds to think about higher level concepts like our goals for the writing, composition, etc.

Later on most people learn to drive. Driving involves following a bunch of rules, doing multiple things simultaneously and being aware of the situation and surroundings moment to moment. Not making mistakes is important because accidents are dangerous. When first starting out, driving is really hard and kind of scary. It can feel overwhelming to do everything required constantly. But jump ahead after 10+ years of continuous daily driving and it’s so easy we don’t even pay conscious attention. The mind can internalize everything required for driving so thoroughly that one can drive between home and work and, upon arrival, not even remember anything about the drive. It’s like being on auto-pilot.

The human mind is able to internalize all sorts of things if the individual does them frequently for a long period of time. Years after something has been internalized it becomes difficult to consciously recall the rules and facts required by the task that has become automatic. This is beneficial because when things become automatically handled by our subconscious it frees up our conscious mind to think on a higher level about the task or (as with driving) to do other unrelated things.

Software Development & Myself

Applying this to myself, I’ve spent so much time doing software development for so long that I’ve internalized a great deal.

I started programming when I was 7 or 8. By 11 I was pretty good; by 14 I was doing really complex, detailed programming. I really enjoyed development, so I spent a ton of time learning as much as I could and constantly improving myself. I had no clue at the time, but by the time I graduated high school I was significantly better at software development than most college graduates. From there I went on to tackle bigger, more complex and more unusual projects.

During my early years there were incredibly few books available, and none in normal book stores. If you could find them at all, they had to be mail ordered. Thus I figured out most of the fundamental concepts, patterns, etc. myself before I ever read or heard about them. The “SOLID” principles had not been defined yet, but I figured them out for myself (except dependency inversion, which wasn’t really practical at the time). I learned the concepts of object oriented programming and got very good at it 20+ years ago. I was doing serious, complex architecture work before I ever heard the term “software architecture”.

As a result of all that time and effort I’ve internalized a great deal of software development practice. I haven’t consciously thought about fundamental principles, OOP concepts or the like in many years. I don’t think about fundamental design patterns or optimization either. They have all become obvious to me now and they get applied automatically. As I write code all of that gets baked in automatically. Heck, I even subconsciously optimize my code, implementations and architectures on various levels with little thought. It’s truly amazing what the subconscious can be taught to do. Which is great, because having all that be subconscious allows me to consciously focus on business requirements, unusual aspects and the big picture.

As a side note, this is what partially (mostly?) inspired my logo.

Extrapolation & Early Development

I’d imagine anyone can learn this level of internalization given enough time. This is why spending a lot of time on software development over many years really pays off.

I do sometimes wonder if starting at a very early age makes a difference. Apparently being bilingual from a young age causes long-term changes in the brain. Source code is a kind of language; I wonder whether it has any similar long-term impact.


OOMA & .US Domain Spam

I’ve had OOMA VoIP service for a home phone line for about 3 years. The quality and cost were excellent, but I wasn’t using it enough and started getting massive numbers of spam marketing calls, so I cancelled it. It was the best cancellation experience I’ve ever had. They were very nice, plus they suggested reselling my hardware and gave me tips and a transfer code that gives the new owner 2 months free and a 1 year hardware warranty. Wow. If you want a VoIP phone get OOMA, they rock.

The reason I was getting all those spam marketing calls is a horrible mistake on my part: registering a .US domain name. You can’t use a private registration service for .US domains, so your personal information gets associated with the domain name. Very shortly after that happened I started getting an insane number of calls from Indian web development firms wanting to “help me” with my new domain. I’m talking 8-12 calls per day, every day. It lasted for 2 weeks before I set the VoIP service to reject all calls not on my contact list with a disconnect message. I turned that off after 2 months and I still get about a call a day! Insane.

My suggestion: Never, ever, ever, ever register a .US domain name (at least not with your real contact information).


AMD vs Intel

TL;DR: For two decades I’ve run 5-6 computers 24/7 with AMD CPUs and never had a hardware failure. A couple of years ago I tried a top-of-the-line Intel i7 and have had to replace the CPU once and the motherboard twice due to hardware failure. Ouch!

I worked at Intel for 5 years and it was a really good experience. Despite that, I’ve been a fan of AMD CPUs for ~2 decades (including while I was at Intel). The AMD processors always seemed to offer a better price / performance ratio. But now I’ve discovered a much more compelling argument against Intel.

I run a high-end primary workstation, a couple of other PCs and 5 servers; the workstation and servers run 24/7 and are on UPSs. Every couple of years I upgrade my workstation, and the old parts (which were high-end at the time) either upgrade one of my servers or become a new server. At present all of the computers I have use AMD CPUs except my workstation.

A few years ago I decided to try Intel, so I built a new workstation around an Intel i7 4790K and a Gigabyte Z97X Gaming G1 WiFi Black motherboard. The reason for the “gaming” motherboard is that I need 2 PCIe x16 slots for 2 good video cards to drive my 3 monitor setup. The CPU and motherboard alone cost around $1,200! The new workstation was very expensive, but it did perform well.

The problem is that it wasn’t reliable. It worked reliably for about a year, then the motherboard failed and had to be replaced. I bought one of the same model and swapped them. The system ran for about another year, then the CPU died. So I had to buy a replacement 4790K and swap that in. It’s been about another year and I’m having motherboard issues again! It will only boot sometimes, and it won’t boot with 32GB so I had to downgrade to 16GB. At this point I’ve had to replace both the CPU and the motherboard, and I’ve just ordered a new motherboard (a different brand this time).

The Intel CPU and motherboard cost significantly more than AMD equivalents to start with, but having to replace the CPU once and the motherboard twice is ridiculous!

By contrast, I literally can’t remember the last time I had an AMD CPU or motherboard fail. I have run some of them 24/7 for nearly 8 years without a single hardware issue. But I try Intel and it’s failure city!

Regarding my next upgrade, I’m late because I’m waiting for new CPU models that mitigate Spectre in hardware. I was going to upgrade to an AMD Threadripper, but then Spectre happened. The 3rd generation Threadrippers based on the Zen 2 core are due out this year; then I’m upgrading. When I upgrade, this Intel hardware IS NOT going into a server. I’m tossing it in the garbage! What a waste of money.


Future NuGet Libraries

Anyone who has worked with me knows I’m fairly prolific. I tend to write a lot of reusable library & framework code. I hate writing the same code twice, so most of the code I write is reusable and ends up in libraries. Over the years I’ve built up a rather massive set of my own personal C# libraries.

I’ve started reorganizing my C# libraries so they can be put up on NuGet. Some of the code is quite old, so I’ll likely modernize it to use newer C# features. I already have some unit tests, but I’ll likely add more. Putting everything up is a large undertaking, so I’ll publish pieces as I go along.

Knowing me, I’ll certainly write new stuff along the way. Which is actually how I got here: as part of my Adventures in .NET Core I started writing a backend framework. I’ve written this sort of thing previously for clients, and it makes backend development much quicker, easier and more standardized. But I’ve never written one of my own. Writing it for myself means I don’t have to compromise or worry about deadlines, so this should be fun.


The Future of JavaScript

I’ve been coding for a very long time. I’ve seen languages and industry paradigms come and go. JavaScript is all the rage currently, and it’s even getting into the backend; but a decade from now it will be fading fast if not completely gone.

In the early years there was BASIC. It was everywhere and everyone knew it. It was designed to teach programming concepts, not for widespread production use. Because it was so popular it was adopted for writing business applications, and some sizable, complex systems were written in it. Those systems were hard to maintain because BASIC allowed programmers to write very messy code with little structure, and many did. The term “spaghetti code” came to describe the very many BASIC applications whose code was a complete mess. But because it was so popular, a lot of programmers thought it was the be-all-end-all language. They even clung to it (as Visual Basic) as the house of BASIC fell.

Then there was C. The language had been around, but it took a while for it to take hold in the world of personal computers. C was designed for low level system programming; it was terse, with few constraints on structure and no type safety at all. It was never intended for widespread use as an application programming language. Yet it became the go-to language for writing large, complex business applications. Those systems were hard to maintain because C programmers used the lack of constraints to write messy code, or code that was so “clever” others couldn’t understand it. Lint was created as an attempt to deal with some of these issues. Yet many programmers believed it was the be-all-end-all language.

Then there was C++. It was created to address some of C’s shortcomings and problems. It was an evolution of C, so programmers mostly accepted it and moved over. C++ attempted to be better for application development by adding more structure and constraints to the language. But in trying to remain mostly compatible with C it still allowed for spaghetti code with very little type safety. Messy and very “clever” code was still pervasive, and so linters became even more important. Many programmers believed it was the be-all-end-all language. Some even clung to it (still do) as C++ was replaced by more modern languages.

Now there is JavaScript. In the beginning no one took it seriously, because it wasn’t intended for doing serious work. But it ended up in every web browser, and as the web became more important for business many developers were forced to use it. This made it “popular”, and many libraries and frameworks were created to try to make it more suitable for business applications. Complex and sizable business applications are now written in it. Those systems are hard to maintain and typically get completely rewritten every few years to support a newly popular framework. Because it is so widely used, programmers who have spent a lot of time in it are starting to believe it is the be-all-end-all language. It is even being used on the back end (Node.js) to write even larger, more complex systems.

See a trend? Very few programmers work professionally in BASIC or C today. Programmers used to more modern languages would likely find them antiquated and lacking if forced to use them. But at the time they were heralded as being great.

The writing is now on the wall that JavaScript will be joining them and fading away. That writing is WebAssembly. For the past couple of decades developers were effectively forced to work in JavaScript because it was the only thing that ran client side in browsers. There was no choice, so a huge community sprang up to support it. BASIC, C and C++ also had massive communities at their peak.

Once WebAssembly is mature and broadly adopted it will become possible to write client-side browser applications in many different languages. Web applications using WebAssembly will download and run faster in the browser than JavaScript. That alone would kill JavaScript client side. But many of those other languages will also be more modern and intended for writing business applications, making them more productive and maintainable.

Some programmers will cling to JavaScript, but most will end up moving to other languages. Once client-side JavaScript is dead, the server side will quickly follow and JavaScript will slowly fade away.

It’s worth noting that there is a significant difference between BASIC, C/C++ and JavaScript, though. BASIC was designed to teach programming concepts to students and is still workable for that purpose. C/C++ were intended for low level systems programming (OS kernels, device drivers, embedded hardware, etc.), where they are still used today and make sense. JavaScript was created just to be used client side in web browsers. Once it is replaced in that domain, its reason to exist will be gone. Once it is no longer used in web browsers I expect it will completely disappear, with no one using it for anything. It will just be a footnote in the history of the web.


First impression of EF Core is very poor

Based on my experience with the prior .NET Framework versions of Entity Framework, I set off to implement the new EF Core version using Database First. What I found was nothing like the experience I expected. This was the most frustrating experience I’ve had with Microsoft development tools in quite a long while.

TL;DR: If you’re trying to do Database First with EF Core, go get the EFCorePowerTools extension. This is what Microsoft should have provided, and it will save you a lot of time and grief.

The .NET Framework versions of EF were both powerful and user friendly. Plus they gave developers lots of options and capabilities so that they could use EF the way that best made sense for the project. Tasks like generating or updating the data classes were very easy to accomplish using GUI windows. Sure, the EDMX file could (rarely) get corrupted, but you are using version control, right? Just pull a known good version.

My goals for this project:

  • Use Database First, because Code First doesn’t generate quality (DBA approved) schema unless you put in a lot of effort to hand code SQL DDL statements in the C# code. That’s a nightmare, so no.
  • Put the POCO classes in a .NET Standard library with minimal dependencies so that they can be reused if ever needed (plus this keeps things clean).
  • Create a separate .NET Core library to contain all the Entity Framework specific code for implementing the data repository including the DbContext.
  • Reference both of those libraries from an ASP.NET Core MVC project to verify it works.
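
The layering those goals describe can be sketched roughly as follows. This is only an illustrative sketch; every name in it (BlogData.Entities, BlogData.Repository, Customer, BlogDbContext) is a hypothetical placeholder, not the actual project code:

```csharp
// Illustrative sketch only -- all project, namespace and class names are
// hypothetical placeholders for the layering described in the goals above.

// --- Entities project (.NET Standard library, no EF Core dependency) ---
namespace BlogData.Entities
{
    // Plain POCO mirroring a table in the existing database schema.
    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }
}

// --- Repository project (.NET Core library, references EF Core) ---
namespace BlogData.Repository
{
    using Microsoft.EntityFrameworkCore;
    using BlogData.Entities;

    // All EF-specific code lives here, so the POCO library stays reusable.
    public class BlogDbContext : DbContext
    {
        public BlogDbContext(DbContextOptions<BlogDbContext> options)
            : base(options) { }

        public DbSet<Customer> Customers { get; set; }
    }
}
```

The ASP.NET Core MVC project then references both libraries and wires up the context via dependency injection.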

Seemed simple enough. I’ve done this numerous times with the old non-core versions of Entity Framework and it was easy. To get started I searched the Internet for instructions on how to do Database First with EF Core and I found quite a few walkthroughs. Unfortunately, none of them worked.

The first problem I found is that some of the walkthroughs assumed you were targeting an ASP.NET Core MVC project, so they skipped steps. I found others that targeted a library, but the first couple I tried failed because the list of EF Core packages has changed at some point in the recent past. I finally got the library ready, which included adding design and tool packages to the library because the POCO generator requires them. This library will end up getting used in production; it should not have dependencies on developer tooling and aids. Sloppy.

To generate the POCOs you have to execute a Scaffold-DbContext command in the Package Manager Console. This is not obvious and not user friendly. It also doesn’t work. I spent at least an hour and a half trying it with different arguments, and even via the Command Prompt, along with more Internet searching. It seems lots of developers have trouble with this, because there are lots of pages dealing with various errors. Oh, and there were multiple errors; every time I thought I fixed one thing a different error came up. It simply refused to work.
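
For reference, the command has this general shape. The connection string, output folder and project name below are placeholders, and the exact switches have shifted between EF Core releases, so treat this as a sketch rather than a guaranteed-working invocation:

```powershell
# Run in the Visual Studio Package Manager Console. The connection string,
# output folder and project name are placeholders for your own setup.
Scaffold-DbContext "Server=.;Database=MyDb;Trusted_Connection=True;" `
    Microsoft.EntityFrameworkCore.SqlServer `
    -OutputDir Models `
    -Project MyData.Repository
```

Outside Visual Studio, `dotnet ef dbcontext scaffold` is the rough command-line equivalent.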

Then I got very lucky! Buried in the comments on one of the many web pages was a link to the EFCorePowerTools Visual Studio extension. It adds GUI support for doing common tasks related to EF Core. One of those tasks is generating POCOs. Plus, since it’s part of the GUI, it doesn’t care what packages are added to the target library, so there is no need to add design and tooling packages to production libraries! I installed the extension, restarted Visual Studio, and within 2 minutes I had the POCOs generated, no problem.

It’s worth mentioning that EFCorePowerTools is not from Microsoft! We have Erik Ejlskov Jensen to thank. Why in the world Microsoft didn’t do this themselves is beyond me. Very poor.

Before finding that extension I was keeping notes so I could write up my own web page on how to deal with Scaffold-DbContext. After finding the extension I realized they no longer mattered and tossed them. Time will tell if the extension has any maintenance shortcomings, but for now it makes life so much better.


JavaScript == Spaghetti Western

The world of JavaScript is like the Wild West in that there are few constraints and nearly anything goes. Some frameworks are trying to make it better, but regardless, a whole lot of the JavaScript I’ve seen is spaghetti code (a mess).

This was a random thought I had while writing something else. I hadn’t heard the term “spaghetti code” in years, but it popped into my head when thinking about JavaScript.


Adventures in .NET Core

I’ve been slowly moving towards .NET Core for a while and have decided now is a good time to try to take the full plunge.

Initially it was just too new and incomplete to be of use. When .NET Standard reached v2 I started writing most library code in that instead of .NET Framework. By targeting .NET Standard those libraries could then be used by not only .NET Framework and .NET Core applications but also Mono, Xamarin, etc. Plus there are some other advantages to the way .NET Standard library projects work that I won’t go into here. But beyond libraries I only dabbled in .NET Core applications mainly because of Entity Framework.

Entity Framework has become an awesome, almost indispensable ORM, in no small part due to the integration of LINQ. Entity Framework 6 is very capable and mature; it works. Entity Framework Core is a complete rewrite and lacks some of the capabilities present in EF6. Plus I personally find the emphasis on Code First, and the at best limited support for Database First, rather alarming. As someone with strong experience in both software engineering and database administration, I’ve always suspected Code First would lead to compromises in the database. EDMX files may not be fun, but they allow for a very productive workflow and don’t require any compromises.
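
To illustrate why that LINQ integration matters, here is the kind of query it enables. This is a hypothetical fragment (the `dbContext`, the `Customer` entity and its properties are made up for illustration), but the shape is the same in EF6 and EF Core:

```csharp
// Hypothetical EF query fragment -- the context and entity are placeholders.
// The expression below is not run in memory; EF translates it into a single
// SQL statement that executes on the database server.
var recentBigSpenders = dbContext.Customers
    .Where(c => c.TotalSpent > 1000m && c.LastOrderDate >= cutoffDate)
    .OrderByDescending(c => c.TotalSpent)
    .Select(c => new { c.Name, c.TotalSpent })
    .ToList();
```

Because the query is ordinary C#, it is type checked at compile time, which is a big part of what makes EF feel indispensable.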

Given the above and the fact that .NET Core has reached a good level of maturity I’ve decided it is time to set off on an adventure to explore what it can do, can’t do and how it works. In addition to .NET Core I’ll use this adventure to also try out some of the other new Microsoft technologies such as Xamarin.

To do this I’m going to come up with a small but meaningful (non-trivial) application to build. I’ll use SQL Server 2017 for database storage and make sure that the database schema includes important capabilities such as keys, indexes, GUIDs, etc. On top of that I’ll use ASP.NET Core MVC to build a RESTful API backend which will provide all the capabilities clients require. Then I’ll build multiple client applications, using various technologies, that access the backend API.

Now that Microsoft is finally playing nice with others there are myriad client possibilities! Just in the realm of web applications there is an almost daunting list:

  • Old school ASP.NET Core with MVC, Razor, Bootstrap and jQuery
  • ASP.NET Core with Angular 2
  • ASP.NET Core with Knockout
  • ASP.NET Core with React
  • ASP.NET Core with React + Redux
  • Non-core ASP.NET MVC with Razor, Bootstrap and jQuery (for comparison)
  • Non-core ASP.NET Web Forms since they are still used in some older systems

There is also an impressive list of non-web client possibilities:

  • WinForms
  • Windows Presentation Foundation (WPF)
  • Universal Windows Platform
  • Android using Xamarin
  • iOS using Xamarin
  • Mac using Xamarin
  • Unity if I wanted to dabble in the world of game programming
  • Maybe even a Linux client using Mono

It would be really interesting (and fun) to build all of those and then compare them. That would be awesome, but in reality I probably don’t have that much time to spend on this. At least for now, maybe I’ll keep this going and get to all of them eventually.

Regarding web applications, Angular 2, React, and Knockout apps on ASP.NET Core by Steve Sanderson is a really interesting and helpful blog post.

I’ll detail my journey through the world of .NET Core in upcoming posts.


What is the difference between Application Programming vs System Programming?

Low level infrastructure software (aka system programming) is what makes up the foundation of operating systems and development platforms. This type of software requires machine code that runs directly on the CPU and can communicate directly with the various hardware components. The end users of software built using system programming are mostly technical people and other programmers.

The vast majority of developers working today write application code. Application programming is the practice of building software that runs on an operating system or runtime platform. Microsoft Windows, Android, iOS and web browsers are examples of platforms that applications are built on. Application code doesn’t interface directly with hardware; instead it relies upon the OS / platform to provide all the required services. Applications are built for end users and business people who may possess little if any technical knowledge.

Site and all contents Copyright © 2019 James B. Higgins. All Rights Reserved.