Atlanta’s Pocket PC User Group

Show and Tell of expensive toys could sum this evening up. There were only seven developers
among the seventy or so attendees. The phrases ‘Compact Framework’ and even ‘.Net’ were
not mentioned once!

Still, it was a fun evening. I got to live vicariously through other people’s high-dollar
purchases (like I’d blow $800 on a PDA; do you know what kind of wheels $800 would
buy? Nice ones).
Unbelievably, some attendees rivaled Michael Earls on
the amount-of-tech-in-your-everyday-life front. It seemed like everyone there had
Bluetooth-capable devices which talked to their car audio system + GPS receiver.

Obviously there is quite a lot to learn about mobile devices apart from just programming
them; my favorite was the new nickname for the BlackBerry: CrackBerry –
apparently checking email becomes an addiction, creating a Pavlovian response
to its incoming-email alert. For you single guys: if you fly often, it appears a BlackBerry will
pick up more chicks than a Pocket PC equivalent. My $200
Dell Axim probably means jocks will be standing in line to
kick my nerdy ass.

The main speaker was Dale Coffing, who
showed off some cool products including a pair of khakis that I’ll be buying. The SCOTTeVEST pants
have eleven hidden pockets and compartments – ideal for stowing a PDA and blunt metal
objects discreetly (in case of attack by jocks in airport lounges). SCOTTeVEST also
sells jackets… get ready to salivate… Their jackets have up to 42 hidden pockets, all
begging to contain an expensive gadget; the jacket even has hidden cable routing to
link any oh-that’s-so-90s wired gadgets together. And just think of the new people
you’ll meet at airport security. The jackets are here:

http://www.scottevest.com/v3_product_info/features.shtml

If anyone knows of an Atlanta-based Compact Framework support user group, please let me know.


Evaluated Node.js, moving 100% to JavaScript – am backing off for a while

It has been fun. Following a month of ramping up on JavaScript, AngularJS, Node.js and Git, my conclusions are:

  • AngularJS looks great
  • Hold off on Node
  • Keep JavaScript close, but not a best buddy
  • JavaScript transcompilers are promising
  • Git is really easy to install and use; embrace it over SVN for disconnected commits and simplicity

Surprisingly easy to get up and running with Node

It is surprisingly easy to get up and running with Node. The screenshot below shows where I was a little over an hour after deciding to install Ubuntu and use Eclipse as an IDE for Node development:
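
For a flavor of how little code that hour involved, here is the canonical hello-world HTTP server, close to the example on Node’s own site (the port number is arbitrary):

    // server.js – run with: node server.js
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello World\n');
    }).listen(8124);

    console.log('Server running at http://localhost:8124/');

That is a complete, working web server – no IIS, no config files.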

Git is trivial to install on Windows/ Linux. It took minutes to create a Git repository in Dropbox.

What is Node.js?

Node has a lot of what you are already used to. For almost everything we can do in .Net there is a corresponding Node package: Socket.IO, HTTP communication, async, a MySQL provider and so on. These are called modules in Node and are installed trivially using the Node Package Manager (npm) from a terminal prompt. Many popular JavaScript libraries are also available as Node modules: Underscore, Mocha and CoffeeScript being particularly popular.
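
As a quick illustration of the workflow: run ‘npm install underscore’ at a terminal prompt, then in code (a minimal sketch; Underscore’s filter function is real):

    // Pull in the module installed via npm and use it.
    var _ = require('underscore');

    var evens = _.filter([1, 2, 3, 4, 5, 6], function (n) {
      return n % 2 === 0;
    });
    console.log(evens); // [2, 4, 6]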

Node literally has one thread and an event loop which cycles through pending events. This means any one part of your codebase can block all other requests. A major difference is the style of coding in Node: ASP.Net etc. are implicitly multi-threaded, pre-empting threads to ensure each server request gets its share of CPU time. Node code must be crafted so that nothing blocks the thread.
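
A minimal sketch of the difference (the file path is hypothetical):

    var fs = require('fs');

    // Blocking: the one thread stalls here; every other request waits.
    var report = fs.readFileSync('/tmp/report.txt', 'utf8');
    console.log(report.length);

    // Non-blocking: the read is handed off, the event loop keeps servicing
    // other requests, and the callback fires when the data is ready.
    fs.readFile('/tmp/report.txt', 'utf8', function (err, data) {
      if (err) throw err;
      console.log(data.length);
    });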

While learning Node, much tutorial code required piping streams and nesting JavaScript callbacks. Apparently most Node code is like this. Such code soon becomes difficult to follow and comprehend. With familiarity this will improve, but well-crafted OO code will always be easier to understand.
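
A contrived sketch of the shape this takes – three dependent async steps, each nested in the previous callback (db, httpPost and handleError are hypothetical placeholders, not real modules):

    var fs = require('fs');

    // db, httpPost and handleError are stand-ins for real modules/helpers.
    db.query('SELECT id FROM users WHERE name = ?', [name], function (err, rows) {
      if (err) return handleError(err);
      fs.readFile('/templates/' + rows[0].id + '.html', 'utf8', function (err, template) {
        if (err) return handleError(err);
        httpPost('/render', template, function (err, response) {
          if (err) return handleError(err);
          console.log('done:', response);
        });
      });
    });

Each additional step pushes the real logic one indent level deeper, which is exactly why it becomes hard to follow.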

Why Node?

JavaScript is everywhere; many developers know JavaScript so why not use it server-side too?

Today we write validation logic twice: in C# on the server and in JavaScript in the browser. Using Node we can write it once and reuse the code in both places.
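
A hedged sketch of the idea – one validation function loadable via require() in Node or a script tag in the browser (the file name and zip-code rule are illustrative):

    // validate.js
    (function (exports) {
      // Illustrative rule: a US zip code is exactly five digits.
      exports.isValidZip = function (zip) {
        return /^\d{5}$/.test(zip);
      };
    })(typeof module !== 'undefined' ? module.exports : (window.validate = {}));

    // Node:    var validate = require('./validate');
    // Browser: var validate = window.validate;
    // Either:  validate.isValidZip('30301'); // true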

Performance is a huge seller. Apparently Node.js is bad-ass rock-star tech that can blow ASP.Net etc. out of the water performance-wise. Let me debunk this: performance is an area where I really kick ass; I have tuned many systems (small and large), generally seeing ~100-400x improvement under load with surprisingly minimal tweaks. Most were systems that had already been tuned.

Performance is a function of your developers and/or having someone on staff who understands performance holistically. Do not select a technology because its theoretical maximum load is 20% higher. At a Fortune 10 company I tuned two maxed-out datacenter installs (~thirty machines each) down to all the machines using almost zero CPU. Undoubtedly several people had spent weeks or months analyzing which machines to buy for ‘peak performance’. Architecture, sensible implementation and tuning are where real performance gains are found.

Performance of Node can be killed by any one bad section of code. Tools to tune Node are very immature. With .Net we use WinDbg/ sos.dll to analyze production systems – it is very difficult to analyze Node in production.

JavaScript is the Future?

As many tech friends said it would, JavaScript: The Good Parts really made sense on my third read. Quality coding can be achieved in JavaScript, but it is far from a perfect language.

Google’s Dart, Microsoft’s TypeScript and CoffeeScript all bring real OO concepts, including classes and even static typing, to JavaScript. Currently they transcompile to JavaScript. Within five years, expect a language in this category to have gained traction and been adopted into all browsers. Current versions of all browsers self-update; once most of the world is running self-updating browsers it becomes possible for new standards to roll out quickly. The powers that be in the Internet world will settle on a standard; that is why Microsoft threw TypeScript into the ring.
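
To make ‘transcompile’ concrete, a class in one of these languages becomes roughly the following plain JavaScript (illustrative; each compiler’s actual output differs):

    // Source (TypeScript-style):
    //   class Greeter {
    //     constructor(public name: string) { }
    //     greet() { return 'Hello, ' + this.name; }
    //   }

    // Roughly the JavaScript a transcompiler emits:
    var Greeter = (function () {
      function Greeter(name) {
        this.name = name;
      }
      Greeter.prototype.greet = function () {
        return 'Hello, ' + this.name;
      };
      return Greeter;
    })();

    console.log(new Greeter('world').greet()); // Hello, world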

JavaScript was cobbled together quickly in 1995 as a scripting language for Netscape. It is very weak and will eventually be ousted. Transcompiling is an intermediary step.

Final Conclusions/ Predictions

Node.js is hot today, but not a good fit for the kind of applications I personally work on: large systems with a traditional RDBMS back-end and a lot of inter-system messaging to slow legacy systems.

Node.js is helping build a great base of JavaScript frameworks for the enterprise, but most are from small, untrusted sources. It is only a matter of time until a serious security breach occurs via someone slipping malicious code into an open source JavaScript library. Once a high-profile incident occurs the JavaScript community will figure out how to mitigate such attacks.

A Node.js rival with multithreading will appear, or Node itself will be extended. Ruby gained multithreading after years of its user base stating that single-threaded web servers were fine.

JavaScript will morph into a real language within five years.


Technology Fragmentation is a Project Killer

Project failure is very real

Standish Group research shows ‘Of 3,555 projects from 2003 to 2012 that had labor costs of at least $10 million, only 6.4% were successful’. 41% were total failures and the rest were vastly over budget or did not meet expectations.

tl;dr “This article summarizes my twenty-five years in the Software Development industry. Project failure is commonplace today; Technology Fragmentation is a key cause.”

1989 -> 1999: One Project Failure – mostly PowerBuilder/ Oracle

From a summer engineer in 1989 to leading a project rescue in 1999, only one project I worked on failed. Seriously. (The only failure was with a Big 5 IT consultancy, staffed mostly with non-technical resources who did not want to be on that particular project.)

All projects from 1994->1999 were PowerBuilder/ Oracle. Many people became experts in them and there were no additional frameworks. Virtually every project used similar architectural techniques.

During this period a small percentage of individuals appeared essential to some projects. At a minimum some hands-on developers undoubtedly shaved man-years off project costs. At least twice I saw several contractors almost certainly rescue a project; for the most part they did this by training/ leveraging existing staff, not super-human 80+ hour week coding.

1999->2001: Fought off Failures – Java + major Frameworks

Technically I had only one outright failure with Java: when I opted not to burn out and resigned from an insanely well-paying contract. They burned through ~$10m in six months.

On the first two Java projects I averaged ~80 hours per week, exceeded 100 hours per week over one seven-week period and even had some 24-hour days. I was the main technical resource, unfortunately hired late in their SDLCs. Developers’ lack of knowledge of the entire technology stack was a core issue. This required me to learn quickly and work unsustainable hours to stabilize the projects and educate others.

On these successful projects several developers followed close behind my lead – it was a team effort to succeed against the unknown technologies, but only ~20-30% of the team contributed in each case. A significant problem was the number of technologies to learn.

With PowerBuilder/ Oracle even the weakest team members were somewhat competent in one of the two technologies, and they generally soon improved (having lots of other people to learn from). With Java and its increasing number of frameworks/ app-servers/ etc it was not uncommon for a project to only have one expert per framework/ tool. This meant several people became critical to project success. If they had bluffed their way through the interview their area was a ticking time-bomb. With a plethora of frameworks it is very hard to technically screen all candidates; unless an expert is used for interviews bluffers can be very hard to weed out.

Some Stability with .Net: 2002-> 2007

My first .Net project failed outright, but it was my only outright failure with .Net until ~2012. No one understood the technology on that first project; about a year in I was making great breakthroughs, solving most of the long-standing issues. Unfortunately, due to missed long-term deadlines our strong manager was ousted and replaced with a ‘yes-man’; we disagreed and he soon ousted me. That project failed within a year and I received several supportive emails from client staff. Approaching $10m of taxpayer money was wasted; subsequently I have read many news stories slamming IT at that major branch of the Government (which employs ~300,000 people).

Once up to speed with .Net, virtually every project called me a ‘superstar’, ‘insanely productive’ etc. and I did not see a single failure. Unfortunately there was plenty of evening and weekend work to extinguish fires and meet deadlines.

Why were these projects all successful? We knew the entire technology stack. In particular I knew .Net and Oracle/ SQL Server very well; this enabled extinguishing fires quickly and permitted time to educate their developers. Many were over-the-top thankful to me for taking the time to assist them (I was glad to help/educate!).

Stress with .Net: 2007 -> Present

By 2007 I still had no failed .Net projects where I had control but most were stressful; typically from overwhelming amounts of evening/ weekend work.

In ~2007 the .Net market really exploded, bringing three major issues:

  • Quantity of frameworks sky-rocketed
  • Quality of frameworks reduced
  • Hiring quality people became harder

By 2007 most projects I arrived at had a fair number of frameworks/ tooling: Microsoft Patterns and Practices, ASP.Net Membership Provider, Log4Net and NUnit were particularly popular. As time progressed ORMs came into the mix; I have worked with at least five ORMs so far. Templating/ code generation frameworks and IDE plug-ins like ReSharper were also popular.

Few new technologies save time in the short term. Most frameworks/tools only reduce time/money/complexity the second or third time they are used. This is well documented. A rule of mine has been to never use more than two new technologies on a single project; the learning curves and risks are just too great.

~2007 -> 2012 was my ‘Project-Rescue Phase’. The majority required enormous effort to understand the technology stack, and generally battles with managers/ architects to stabilize things – typically by removing unnecessary technologies and performing major refactorings to simplify the code and make it testable via Continuous Integration.

On my final ‘project-rescue’ contract we met a deadline within three weeks, despite the team having zero to show for the previous three months. Their Solution Architect had left them a high-level design document littered with fashionable buzzwords; nothing useful had been produced. Two previous architects made no progress, including the original solution architect; I was their fourth architect in about three months. One other developer and I began coding everything from scratch to meet phase 1 in three weeks; the other five people did little but heckle. The consulting company I assisted was still being difficult so I left them to it; they lost that client. To be fair, they did eventually put something into production, but it took three times longer and was a terrible product. They burned through three more architects during that time.

Performance Tuning: 2010

Around 2010 I advertised for and landed about ten performance tuning/ defect fixing contracts. Massive stress but great intellectual challenges fixing issues customers could not squash. I had a 100% success rate with these, taking a maximum of four days 🙂

It was an opportunity to observe many systems over a short period. Unnecessary complexity was a constant, with frameworks and trendy development techniques the primary offenders. One customer had a mix of Reflection and .Net Remoting that had hindered debugging most of their code base for years. I removed that problem in ~twenty minutes, which stunned their coders – they were amazed, with wide-open eyes and gaping mouths, cartoon style 🙂 [Topic-change: this is where experience counts, and that minuscule piece of work was in the 100x Developer zone. Such times are rare; do not let anyone tell you that they are consistently a 10x developer.]

100% Failed .Net Projects: 2012 -> Present

From 2012 forward I decided to stop the ‘workaholic rescue’ thing and instead try to talk sense into managers/ architects/ stakeholders. This was ineffective. Two projects failed outright and the third is a Death-March. Stakeholders believe all is fine as they ‘tick off progress boxes’ but reality is a long way from their current perception. [Update Feb 2014: They are now at least $50m over budget]

Two of these projects suffered from ‘resume-driven architectures’; the other from wishful thinking timescale-wise, though they hired about two hundred Indian contractors to compensate, which always works (not). I was tempted to give each member of the leadership team three copies of The Mythical Man-Month; three copies so they could read it three times faster.

Quantity Up/ Quality Down for additional Frameworks and Tooling:

From 1989 -> ~1998 the number of technologies was modest.

About 1999 the Internet-Effect really began. Ideas took center-stage in the early days. Certainly in the Java world many were rushing to use the latest techniques they had just read about online: EJBs, J2EE, Distributed Processing, Design Patterns, UML etc… Most teams were crippled by senior staff spending their time in these trendy areas rather than focusing on business needs (aka meta-work). This coincides with the beginning of my ‘project rescues’ and being told way too often that I was “hyper-productive compared to the rest of the team”.

By ~2007 open source frameworks and tools were center-stage in most projects. Since then we have seen exponential growth, and large companies (Sun, Microsoft, even Google) have proliferated their dev spaces with low-quality framework/tool after low-quality framework/tool, presumably hoping some would stick. Apple is one of the few to exercise real restraint. The Patterns and Practices group within Microsoft is a particularly shameful example most of us are familiar with.

Over Twenty Languages/Frameworks/ Tools is now common?

It is tough to single out one project, but below I quickly listed forty-two basic technologies/ core concepts a sub-project at one company used:

“VS 2012, .Net 4+, html5, css3, JavaScript, SPA/ MVC concepts, Backbone, Marionette, Underscore, QUnit, Sinon, Blanket, RequireJS, JSLint, JQuery, Bootstrap, SQL Server, SSIS, ASP.Net Web API, OData, WCF, Pure REST, JSON, EF5, EDMX, Moq, Unity DI, Linq, Agile/Scrum, Rallydev, TFS Source Control, TFS Build, MSBuild scripts, TFS Work Item Management, TFS Server 2010/ 2012, ADSI, SSO, IIS, FxCop, StyleCop, Design Patterns (classic and web)”

Beyond the buzzwords, virtually every area of the application was a custom implementation or modified framework rather than taking standard approaches. It was certainly job security, and it helped lock out anyone new to the team.

That is just one of three projects since 2012 where I “complained three times, was not listened to so walked away as calmly as possible rather than fight like I used to”.

Resume-driven Architectures / Ego-driven Development

Why do most contemporary projects use so many frameworks and tools? I see three key drivers that operate during the Solution Architecture phase:

  • Strong External Influence
  • Resume Driven Architectures
  • Ego Driven Development

Strong External Influence is a key driver: SOA appearing on magazine covers, Microsoft MVPs all singing the same tune, etc. Let’s look at how these work. What appears on magazine covers is driven by the major advertisers; SOA sold more servers, hence it was pushed on us. Many friends are MVPs so I must take care in explaining this: many try hard to stay independent, but most are influenced by their MVP sponsor to publish material around certain topics. Over the years I have seen many lose their MVP status, and generally it was after outbursts against Microsoft, or after they stopped producing the material Microsoft Marketing wishes to see. Apologies to MVPs, but you all know this is the truth.

Resume Driven Architectures means the Solution Architect desires certain buzzwords on his/her resume to boost their own career, and/or is insecure about finding their next role without the latest buzzwords. On one project I had to leave early, the Solution Architect mandated an ESB for a company with under 2,000 employees! Insanity. Of course they failed outright, but not before going three times over schedule and having almost 200% turnover in their contract positions during a six-month period! It is not fair to single out any one individual; we have all seen a platoon-sized number of such people over our careers.

Ego Driven Development: Bad managers tend to compare ‘number of direct reports’ when trying to impress one-another. Bad architects do the same thing, just with latest buzzwords.

What needs to happen?

With fewer technologies one or two key players can learn them all and stabilize a project. This is not feasible with over forty technologies.

Already in the JavaScript community we are seeing a backlash against needing large numbers of frameworks, but this is causing further fragmentation. A core concept of AngularJS (and others) is to not rely on a plethora of other frameworks. Of course, the early stages of learning ‘stand-alone’ frameworks like AngularJS are tough. Frameworks generally do not save time until the second or third project we use them on. We could learn AngularJS, but what if our next project does not use it? Time wasted, likely no efficiency gained.

No-brainer: Reduce additional frameworks

Doh, virtually all of us realize this! The problem is how? Personally I am researching Angular and Node with the intent of waiting until a suitable position appears. This approach vastly limits the projects we can work on. [Update: I ended up taking a Jenkins/ CI stabilization job close to home, but eventually did get to use Angular there.]

Personal experience shows that once a project with a vast number of frameworks is given the green light, it takes a Herculean effort to even tweak its Solution Architecture. As independent contractors we can avoid clearly-crazy projects, including those loaded with buzzwords. Unfortunately that limits the projects we can work on. It may keep us sane though; I believe:

“Beyond two new technologies project success is inversely proportional to their combined complexity”

Ensure the Solution Architect is Hands-on

I used to believe this was a perfect solution: if only Architects had to build what they preached, they would constrain Architectures to the bounds of reality. Unfortunately this appears not to be universally the case; perhaps it reins them in a small amount? I have witnessed more than one case of someone attempting to implement their own bizarre architecture.

In the worst case, having to implement it themselves must rein in an Architect’s craziest ideas. Personally I shy away from pure architecture, especially above multiple teams. Before now, when under pressure, I have resorted to semi-bluffing and palming quickly cobbled-up ideas off onto others, safe in the knowledge I did not have to implement them. I soon became cognizant of what I was doing and brought it to a swift halt. Do you think others will be so honest? Let’s try to ensure Architects are on implementation teams.

Ignore vendor influence

Guy Kawasaki has a lot to answer for. IT vendors have long tried to sell us what we do not need, but Guy Kawasaki introduced many of the evangelism techniques we see today.

Attended a free conference or user group with quality speakers lately? Receive free trade magazines? Java ones are funded by the larger players in that space; Microsoft ones often (indirectly) by Microsoft itself. There is great value in these resources, but please keep your eyes open for manipulation.

SOA is a primary one I refer to. Magazine after magazine had SOA emblazoned on their front covers, and many conference talks were around SOA; it became the buzzword du jour for years. SOA used conservatively is fantastic, but from about 2002->2010 I saw project after project with SOA sprinkled around as if salt from a large salt-shaker. Refactoring to remove/ short-circuit SOA was a key technique of mine – ‘strangely’, removing much of it led to much more maintainable and performant code. Why was SOA so heavily hyped by our industry? Distributing code leads to more servers, which increases hardware sales and, more importantly, server license sales. Server license sales are where the big players make their real money. Even a smaller company’s SQL Server or Oracle licensing costs soon ring up to millions of dollars. High costs accompany CRM, ERP, TFS, SharePoint and most other common server-based software.

Younger Architects are particularly susceptible to vendor influence. Younger people are more easily influenced, tempted by implicit promises etc., and soon saddle their projects with many trendy buzzwords. How could a project possibly fail if every buzzword is hot on Reddit and our vendor representatives cannot stop talking about them?

Embedding consultants/ evangelists into large companies is very common. Received free conference tickets from a vendor? Free training and elite certifications? Sorry to lift the curtain, but clearly these are tricks which exist to coerce you into using particular technologies… and buy more servers! The consultants and evangelists are of course generally not evil, but they are trained to believe in what they are selling.

Become a Solution Architect

Being the Architect certainly works. Every project I had significant control over was a tremendous success. Unfortunately most projects select their Architects based on popularity with management and other non-technical attributes.

Frequently senior leadership believes Solution Architects should manage multiple projects and not be hands-on with code. This is a mistake. Personally I have turned this role down several times as it leads to poor Architectures – as stated above, I have caught myself bluffing in this role before. Solution Architects should not span multiple projects. Staying hands-off for long leads to believing the marketing around technologies, and marketing is often far from reality.

Most companies’ in-house Architects tend not to be the strongest technically. Senior leadership looks for softer skills – can they convince/ bully others, do they have a large physical presence, etc. Notice how often we see tall white male Solution Architects? ~80%+ of the time, yes? When this is not the case the Architect is almost always technically sound – because they attained the position on technical merit. All too often leadership looks for someone ‘with weight to throw around’ – at the Fortune 10 discussed above, virtually all ‘thought leaders’ in our department had a large physical presence, and shouting at subordinates was second nature. It was amusing when I was asked to review the work of the two worst because so many people were complaining about them.

Conclusions

Hopefully it is clear that we must reduce the number of technologies in our projects. For the foreseeable future it is unlikely we can return to the stability/ predictability of pre-Internet/ tech-boom days.

This post is far longer than my notes/ original outline predicted. In future I will partition posts into more digestible subtopics with more focus on how we can improve. There is good information here, so under time-pressure I decided to publish as-is. Being between contracts Angular and Node are calling my name. These two appear the most likely to emerge as victors from current JavaScript framework fragmentation.


10x Developers and 10x Projects

In addition to Scott’s and many, many other 10x Developer posts, here are opinions from someone labeled ‘hyper-productive’ on most projects.

Key Points

  • 10x is a zone we enter at times, it cannot be sustained
  • 10x has historically (in Mythical Man Month etc) meant the difference between worst and best developers
  • Being 10x better than average is a rare occurrence with short duration
  • 10x zone is only achieved after months or years perfecting required skills/ preparing conditions for the highly productive period to commence
  • Many important tasks take a very similar amount of time regardless of the individual – e.g. attending the daily stand-up, liaising with another department, manual testing etc.

Common Characteristics of being ‘in the 10x zone’

  • A task that does not lend itself to parallelism – efficiency is gained from reducing communication etc.
  • A full-stack developer avoids the need to interface with knowledge silos – this is common with hyper-productive developers but often leads to resentment from the team unless great care/preparation is taken to handle the political backlash
  • Using a library/ technique highly suited to the task at hand – a great example is deserializing/ serializing XML by hand vs. employing a tool like xsd.exe or JAXB. Many thousands of lines vs. a trivial library call can lead to man-months of saved effort
  • Already familiar with business/ technical problem – second time around is always faster. By the third time most problems are solved many times faster
  • Bust through politics – obtain access to all systems, and are shielded from political backlash of stepping on toes

10x Projects are a far Bigger Deal

Over the years I have seen around fifty projects. Excluding the really crazy ones, 5->10x is a rough measure for the difference in productivity we see at the project level; more if operational problems are factored in – too many systems are moved to production before being fully stable. Many projects fail outright, so technically there is an infinite difference in productivity; 5-10x is a rough approximation between the ones that make it to production.

Common Characteristics of very inefficient Projects

  • Too many people – and/or too many of the wrong people
  • Cumbersome, un-enjoyable process
  • Developers treated as commodities – no praise for quality work
  • Rude, non-technical leadership
  • Best developers flee the project

Anyone with a few software development projects under their belt knows the main issues that lead to poor projects. Unfortunately, after all my years in software development I am now of the opinion that many projects are inefficient by design. Yes, the leadership team actually desires this! Many managers long for very large teams. The root cause appears to be Empire Building: a larger team brings the manager/director more power. Many resist promoting others who could potentially challenge them, and several times I have witnessed life being made difficult for quality individuals with the sole intention of forcing them off the project. As time progresses leadership weakens, and this can decimate large companies, especially during tough economic times.

Conclusions (tl;dr)

Obviously there is a massive productivity difference between the best and worst developers. That does not mean someone highly productive on Project A will immediately be highly productive on Project B.

A badly run project can cripple even the best individual’s ability to do great work. This is of far greater importance than bickering about whether one developer is actually 10x or not. One great developer is just one great developer, with a limited skill-set. The team wins the war; great developers are often decisive in key battles, but they cannot win wars alone.

Developers have specialties. Personally I still struggle until I am up to speed with a new technology – it takes time and hard work to be ‘10x’ again with new technologies.


Should I buy a 3D Printer?

Thinking of buying a 3D printer? Let’s give that thought a quick reality-check… I was ready for a somewhat hobbyist experience, but it took way longer than expected to get the printer assembled and printing acceptably.

Mine is a $549 Printrbot kit. Interestingly, perusing the forums for the current generation $2,200 MakerBot Replicator 2 showed its owners experiencing issues similar to what I’ve gone through.

Assembly of the kit took five to six hours. Getting it to print reliably with PLA probably took a further forty hours, though I did fabricate a new platform and adjustable bed along the way. If time permits I’ll follow this post up with common issues + solutions, so maybe you’ll be up and running in under twenty hours. Either way, expect to become a 3D printer technician and be sure to have an abundance of patience handy.

The image below sums up the experience nicely. See the tools? Notice the spool holder made from DIY parts? Surprised to see DIY screws, power drill etc nearby? Well… forums and blog posts are littered with people who purchased a 3D printer and abandoned it before making good prints. Surf some forums and you’ll notice virtually everyone showing off their prints has their printer in some kind of hobbyist workshop; typically they’ll have loads of tools around and are veterans of past fabrication/ advanced DIY projects.

But apparently a child can build one? Yes, some manufacturers are suggesting you buy one for your eight-year-old and he’ll have it assembled and printing in no time… Not a chance! I did see one blog post where a young boy had assembled the kit, but “so far they were having trouble printing”. Younger than twelve I’d say to totally forget it and buy him/her a Lego Mindstorms or similar. High school age is probably more appropriate; even then a tinkerer father on hand is almost a necessity.

Should I spend $550 or $2,500?

Now that my $549 Printrbot is dialed in and has had a few modifications, its prints are excellent. Given the explosion in 3D printer popularity, anything built in 2013 is going to look like a dinosaur in a few years. Unless you need to print very large objects now, I recommend buying a lower priced printer today and upgrading in a couple of years, when mass production techniques should bring costs way down and take quality/ ease of use way up. I purchased Make: Ultimate Guide to 3D Printing for $6 as they reviewed about fifteen current generation printers.

Should I buy a Kit or Fully Assembled?

Assembly is typically only ~$100 extra, but the experience [frustration!] of building your first printer is invaluable. I was too young to build my first home computer in 1981 so was bought an assembled one. I did not make that mistake this time; the kit came with poor/incorrect instructions, but it was the correct route to take for both the experience and the bragging rights in twenty years’ time.

What to expect from a Kit?

Until the last couple of years most 3D printers were built from open source online designs; hobbyists sourced components themselves and spent unbelievable amounts of time tweaking. My Printrbot is a well-priced kit building on these many years of open source achievements. I doubt there is much profit margin for Printrbot. Also, you will have noticed a lot of plywood on my printer; that’s laser-cut plywood. It has issues, but it is an inexpensive way of producing the parts – once a laser cutter is purchased, the cost of a chassis etc. is next to nothing for smaller scale manufacturers, compared to the material/hours required to 3D print parts as was common in the early days.

A kit gets you to where these serious hobbyists were, far quicker and for a reasonable price.

Take a look at these pictures for an impression of what to expect:

Their website says it takes two hours to build. Perhaps once I’d built a few that would be true; in reality it takes about five hours. Documentation is poor and was incorrect in several places, as incremental design changes have occurred since the videos/ online help were created.

How long until Printing is possible after Assembly?

Likely some owners obtain a decent print on their first attempt; most do not – actually, possibly most never do before putting it in a closet or on eBay! Expect several hours to several days before anything reasonable is printed.

ABS filament emits fumes which made my eyes sting, but it is much easier for a beginner to use than PLA filament. Since my workshop is small and unventilated I have moved exclusively to PLA; if you can, ensure your printer is in a well-ventilated area and start with ABS.

Hopefully time permits me to write a separate post on getting started, but main tips are:

  • Level the bed
  • Ensure z-home is set correctly (distance from the extruder nozzle to the bed)
  • Ensure the printer extrudes at the correct rate (pretty easy for ABS)
  • Figure out the slicing and printing software
  • Use calipers to roughly calibrate movement along the x, y and z axes (fine-tune when printing ok)

A common beginner issue is clogging up the hobbed bolt with filament (tension springs set incorrectly will do this). I’ve had mine out for cleaning at least twenty times, but it’s been fine for a good while since I figured out the correct spring tension for PLA (springs compressed to ~13.5mm). The following image shows a clogged hobbed bolt being cleaned with a needle (I lost one needle, so now I store it using a strong magnet to fix it to a drywall screw):

What kind of quality can I expect?

Quality is really good in my opinion. I’ve printed several printer upgrades and the precision is incredible; hex bolts drop right in where they should etc.

It will take a while, and likely much frustration, to get the printer dialed in. After about sixty hours I seem to have the basics down, and when an issue occurs I now know what to tweak. Take a look at the next photos to see my progression with PLA:

The final print shown is a case for a Raspberry Pi. It fits perfectly, and this was printed before I added most printer upgrades!

Tools required?

Hmm… this is a tricky one. As a long-time DIY type I have access to a vast array of tools. The drill press and cross vice shown below have been particularly useful, but these are not beginner tools.

At a minimum you need:

  • Precision Calipers (only about $20 on Amazon)
  • Quality screwdriver set (one with lots of quality bits is fine)
  • Jewelers precision screwdrivers
  • Small wire cutters and pliers
  • Tweezers (to tease strands of stray extruded filament away from the nozzle and lift prints)
  • Very sharp craft knife

These are almost essential:

  • Quality precision pliers, angled pliers and wire cutters (Xuron or similar)
  • Quality oil/ grease
  • Circlip pliers
  • Telescopic Magnetic Tool to hold awkward nuts in place during assembly

Ensuring bolts thread perpendicularly:

Final Words

Have a spare fifty+ hours, $550 and buckets of patience? You should buy one now!

Remember this is not Software Engineering; interacting with the real world is a whole different ball game. Fellow classmates and I discovered this during our postgrad Robotics degree. Software is predictable and repeatable; the real world, often not so much. In many ways 3D printing is similar to robotics – some software is involved but there is a lot of trial and error + tinkering.


Compacting Virtual Machines (VirtualBox and VMWare)

Google has never linked me directly to this information, just theories. One day time permitted me to run careful tests, so I am confident these techniques are correct and efficient:

Simple tricks to reduce a VM’s disk needs:
These will wipe GBs from your vmdk/ vdi.

  • Disable Windows hibernation (hiberfil.sys is the size of installed memory, you don’t need it)
  • Disable the memory paging file (paging file in a VM makes little sense to me)

Compact a VMDK (VMWare including VMWare Player):

  • Ensure you have no snapshots (as of writing compacting does not work with snapshots)
  • Launch the VM
  • Inside the VM defragment its disk (Defraggler works great, Windows defrag is ok)
  • Inside the VM run “sdelete.exe -z” from DOS (as admin). This zeros out the free space and is an essential step
  • Shut down the VM
  • From VMPlayer: Edit Machine Settings -> Hard Disk -> Utilities -> Defragment (optional step, sometimes helps – official documentation is poor)
  • From VMPlayer: Edit Machine Settings -> Hard Disk -> Utilities -> Compact

At the final step you should see a huge reduction in VMDK size.

Below is a screenshot showing the features in VMPlayer. Remember this is next to useless unless you run Mark Russinovich’s “sdelete.exe -z” to mark the free space with zeros. Compacting VMs has been this way for years; it’s April 2013 now and surely soon ‘detect and zero free space’ functionality will be built into their compact options.

The image above shows a VM that once reached 20GB, before being compacted back down to 10.7GB. These are typical results. Once compressed, my two work VMs zipped down to ~4GB each; fine for archiving working databases, dev environments etc. One customer’s backup procedures left me concerned, so weekly the VMs were AES encrypted and copied to a USB keychain flash drive.

Compact a VDI (VirtualBox):

I used VirtualBox from about 2008 until very recently. Here are the steps to compact a VDI:

  • Ensure you have no snapshots (as of writing compacting does not work with snapshots)
  • Launch the VM; inside the VM defragment its disk (Defraggler works great, Windows defrag is ok)
  • Inside the VM run “sdelete.exe -z“. This zeros out the free space and is an essential step
  • Shut down the VM
  • From DOS (as admin):
    • cd <location of your VDI>
    • "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd <your disk's name>.vdi --compact

Hope this helps folks. Any issues/errors please post in the comments and I’ll update the post.
