Phase I: (Idealism) Treat building software like building a bridge. Analysis --> Design --> Development --> Testing --> Implementation.
Phase II: (Pragmatism) Realize that Phase I doesn't work. Try to figure out why. Decide that the weak link is Analysis, i.e. we're no good at estimating and hitting deadlines because we never have good enough specs. Devote your life to the art and science of getting the perfect spec. Implement the methodology du jour to make that happen.
Phase III: (Realism) Realize that Phase II doesn't work. Decide that no existing Analysis methodology works because no one knows how to conduct analysis, no matter what the methodology. Focus on optimizing the practitioners with better education and 10,000 hours of real-world experience.
Phase IV: (Enlightenment) Realize that none of the other phases works because we've been focusing on the wrong constraints all along. Instead of optimizing scope, quality, or quantity of the product, optimize the schedule. Adopt a new model: define the deadline first, then build whatever you can by that deadline. Hit your deadlines every time, not by estimating better, but by not caring how pretty your deliverable is. Eliminate the problem of estimating by making the deadline a constant. Eliminate the problem of analysis by replacing it with prototyping and revising. And eliminate the problem of blueprints by adopting a strategy of inventing, not engineering.
Phase 0 (Big Corporate IT): The PM is given a deadline by the business, goes through the song and dance of negotiating deliverables (without any analysis being performed or any developers consulted for estimates), and is forced to start work with a small team of internal people from outside the business area who aren't currently resourced to any projects, but who've coded a few lines of VB and so can probably build a web application that scales to 5,000 users. The PM then hires outside contractors (probably with even less knowledge of the business) to fill in the resourcing gaps as the deadline looms and the demo still doesn't work.
No real objective measurements for project success are defined or put into place, but at least the executives can say they're "embarking on a technology-driven initiative to improve [area of the business they're responsible for]" and so they meet their bonus criteria for the year, and if they've done everything correctly, the executive's contract will end before the project goes red.
If all doesn't go well, well, that's why you have PMs! They're the "responsibility buffer" - the guys you get to blame and fire rather than the actual executive who initiated the project and failed to provide real support and leadership.
They're the "responsibility buffer" - the guys you get to blame and fire rather than the actual executive who initiated the project and failed to provide real support and leadership.
Sorry, I disagree with this. I haven't seen Senior Execs getting fired. In fact it's almost always the other way round. The Execs take credit for nearly all the success even if they don't lift a finger to make it happen. It happens so often that Excellence awards and cash rewards go to executives 'for showing leadership and successfully leading the team to success'.
If things go wrong it's always 'the programmers didn't do their job properly, so what can one exec do?'
Just to note, proper PMs are rarely without a job, they're pretty hard to find. A good PM is paid more than a good developer, usually silly money, because true project management is a pretty large skill-set.
What you're describing is the enterprise habit of giving a generic middle manager the PM role and expecting them to have all the skills and then firing them when said project implodes.
Management kicks these projects off so quickly in order to meet their quarterly bonus - not that they don't get an annual one too. The quarterly bonus lets them synchronize the announcement of initiatives with Wall Street.
I once interviewed a project manager... asked how he came up with deadlines. He said "ask the programmers and subtract 20%". Talk about a red flag in an interview!
A recent conversation I had with my MD, regarding how long to estimate for a project, involved me adding 6 months for each feature added to the specification.
We're also careful to emphasise how inaccurate these estimates are, and that we don't really know until we start.
Of course this is the initial "finger in the air" stage. We try to be a little more concrete with more immediate requirements. Of course, we've gotten those badly wrong too.
That's a good way of explaining the old adage: timely, complete, low-defects. Pick two. In software, it does seem like timely and low-defects should be the fixed two, since "complete" implies some knowledge about what the software should be, and we're rarely right about that ahead of time :-)
" Hit your deadlines every time, not by estimating better, but by not caring how pretty your deliverable is."
I'd kill for some Phase IV PMs. :)
(Actually, it's rare that I meet Phase I PMs. Very few PMs think in terms of developing software across those 5 stages. To them it's Photoshop -> Make Photoshop Mockup Do stuff -> Fin)
But how does that help the customer (be it internal or external)? How is he supposed to align capital, human or financial? For example: when does he tell the marketing team to start the big launch campaign, etc?
An example would be something like a mainstream game, say Battlefield 3. They have marketing videos months in advance, they sell games in advance, and by doing so they promise that the product will be available at a certain point in time. It may contain bugs, but, unlike what you say, it cannot just be whatever is ready at that time (say, without support for sound, or with no controls)...
This may come off a bit cliche... but there is the alternate Blizzard model (yes, I know Blizzard isn't the only one doing this). Blizzard has tended towards the "it will release when it's done" model in the past, and recently with Diablo 3 their model was identical until the very end, when they cut half (maybe less than half, but it felt like it) of the game out of the final deliverable in order to decide when they were going to deliver (they cut the PvP content, though they intend to add it back later).
In this way I think Blizzard provides a good counterexample: A) they don't set deadlines at the beginning, and B) they don't ship whatever is done.
I like your idea about constant deadlines, but I'd add a twist. In a novel writing course I'm attending, I know what I want my character to say as her last line, so I build the dialogue backwards starting from that.
I should do the same with a deadline: estimate what is to be delivered on that date, backwards. This could be a good method for reaching a consensus on scope with customers, and they would understand that meeting the deadline is a given.
Then there are those who care very much how pretty the deliverable is (Eg Apple) but who also hit their deadlines. How? It's not via Phase IV as stated.
Apple drastically cuts feature scope early and often. The result is products that seem a little feature light to many technophiles and reviewers, yet sell like hotcakes to the real customers. How many companies could resist putting a camera in their hot new phone when they know there will be many negative comments on that exact point?
Apple is willing to wait and get it right in the next iteration.
They are doing a variant of Phase IV. The PMs are disciplined and upper management has the mindset of what we would hope for from sharp PMs.
NP: How long will it take?
P: It'll take as long as it takes.
NP: I don't understand. What does that mean?
P: When you tell me that the product is finished, it's done.
NP: You mean you can't estimate how long it'll take?
P: I can't estimate when you'll decide that it is done, therefore an accurate prediction is impossible. From experience I know that a project like this can take from three to nine months. A lot depends on you, the choices you make and how well you communicate them to me. A lot also depends on how many times you change your mind during the process and how many times you change the specifications during the job. Major changes can even cause the project to be scrapped and started over. Rest assured that I will not be the one slowing this down.
NP: How much will it cost?
P: It will cost as much as it costs.
NP: I don't understand. What does that mean?
P: You just learned some programming! Go back to your first question and re-read the entire thing. This is called an endless loop.
P: Rest assured that I will not be the one slowing this down.
It would be hard to trust a programmer who was so comprehensively trying to avoid taking any responsibility for anything.
Of course many factors will affect the overall time and money required to complete a project. Any decent manager knows this. But a programmer who can't even estimate a project given a reasonable set of assumptions is just trying to cover his ass. IME, he almost certainly will also be the guy who will slow things down, because being completely unaccountable for his performance he has no incentive not to.
This takes a fairly narrow view of a development process, IMO. The "but you didn't specifically tell me to do X" attitude can border on the irresponsible and/or lazy. It's all nice and well to expect accurate step-by-step instructions to be communicated by a PM, but the onus also falls on developers to meet them half-way.
While machines may require explicit and literal step-by-step instructions, a human being's value-add is in thinking rationally, making inferences and (when necessary) asking questions.
This is the difference between an engineer and a programmer.
That programmer is giving literally correct answers but acting like a passive subject in what should be an active relationship.
Our role as engineers is to push back on all of the possible negative choices/directions a product-owner or business person makes and guide them to the simplest, scoped solution that solves their problem.
So you have an easily identified problem to solve.
Specification and Responding to Change are not mutually exclusive. You do the first. You get an estimate. When a change is mandated, you specify the change. You estimate it. When you discover a feature was under-specified, you re-specify it.
You wouldn't find out your code doesn't work, and then not update the tests. Why would you treat specification any differently?
When the 3 month project ends up being a 9 month project, you have clear, actionable documentation on where your process needs fixing, and if the developer truly did hit all their marks, they get their well deserved rewards/kudos despite being saddled with a poorly managed project.
There are projects (I'm sure) that are practically impossible to specify well. I've never worked on one though. Web development is, by its nature, a rather straightforward process. The more unpredictable portions (design, data migration) can be sandboxed in a pragmatic manner.
I've been a part of projects that have run very well and satisfied the client despite changes doubling the original project cost. That's what Change Requests are for. To make sure you're all on the same page and everyone knows what they're paying for.
The fallacy of Agile for the projects I've worked on is that the clients will be delighted to receive "what they want" when they don't know the ultimate timeline, can't make their own plans around it effectively, and have paid a lump sum of money thinking they were buying X, but instead received X--.
Because let's be realistic: when you talk about shifting to a "lean" process in a fixed-bid environment, you're not talking about delivering a Y better suited to the client's needs to a client who paid for X. You're talking about delivering a subset of X that fits within the time allowed. And who ever walked away happy when they got less than they paid for?
If you take for granted the Perfect Specification is impossible, you don't have to shoot the whole process in the head either. Would it be valuable to spend a few minutes post-design on deciding what Validations apply to the Form fields presented? Or how to map those fields to your data-store? Spend a few minutes thinking about what business-logic applies "in the background" once that Form is submitted? What happens on error? What test-cases should be defined to ensure we get the basics covered like HTML escaping, CSRF protection, etc?
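As a rough sketch of what those few minutes of thinking can produce (the field names, rules and cases below are hypothetical examples of mine, not from any real project), the output can be as small as a little data structure that doubles as documentation and a test checklist:

    # Python sketch: a "few minutes of specification" for one form.
    # All field names, rules, and test cases are hypothetical examples.
    signup_form_spec = {
        "fields": {
            "email":    {"required": True, "max_len": 254, "format": "email"},
            "password": {"required": True, "min_len": 8},
            "name":     {"required": False, "max_len": 100, "escape_html": True},
        },
        "on_submit": [
            "reject if email already registered",       # background business logic
            "create account and queue welcome email",
        ],
        "on_error": "re-render the form with field-level messages, keep input except password",
        "test_cases": [
            "valid input creates an account",
            "missing email is rejected with a field error",
            "a name containing <script> is stored and rendered escaped",
            "a request without a CSRF token is rejected",
        ],
    }

Nothing fancy, but the validation, error-handling and test-case questions get asked before the code is written.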
Even a deliberate attempt to cover half of what you might ultimately need to consider to deliver this feature makes the rest so much easier, more predictable and easier to estimate.
The dirty secret is estimates (again, speaking from experience in Web Development) aren't all that difficult. It's just a Garbage-In-Garbage-Out situation. I've been doing this for 10 years. Estimating the time it'll take me to write a Form is not difficult to do accurately. Just like I can't off-the-top-of-my-head estimate the effort involved in writing a Web Framework or O/RM before breaking it down into Routers, Actions, Templates, etc, I can't estimate a Feature without breaking it down into smaller chunks either.
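To make the "break it down into smaller chunks" point concrete (the tasks and hours below are invented for illustration, not figures from a real project), the estimate for a feature is just the sum of estimates for pieces small enough to have done before:

    # Python sketch: a hypothetical breakdown of one "contact form" feature.
    # Each entry is (task, estimated_hours); the numbers are illustrative only.
    feature_breakdown = [
        ("routing and controller action", 2),
        ("form template and styling", 3),
        ("server-side validation", 2),
        ("persist submission and send notification email", 3),
        ("tests: happy path, validation errors, escaping", 3),
        ("deployment/config changes", 1),
    ]

    total_hours = sum(hours for _, hours in feature_breakdown)
    print(f"{total_hours} hours across {len(feature_breakdown)} tasks")  # 14 hours

Garbage in, garbage out still applies - but at this granularity the garbage is much easier to spot.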
The overarching theme in most of the comments to this post is that estimates are hard, impossible or not worth doing. I respectfully disagree on all counts (within the context of web-development anyway, which is all I can speak to here).
It certainly doesn't benefit the Developer to have all the accountability and none of the authority when working on an extended project without strong Project Management backing him or her up. So it seems to me pretty self-defeating when instead of figuring out what works and what doesn't, like your average Developer would tackle any other problem, instead the popular opinion seems to veer towards throwing out the baby with the bath-water and absolving oneself of responsibility instead.
The Executives are going to blame someone, and it's rarely going to be themselves. At least not in any meaningful way. Like it or not if you're working on a Doomed Project, and you don't have someone to back you up, realistically you're going to be looking at a reevaluation in the capital of your own reputation. Fair or not, that's the way it works in my experience more often than not.
I think one common problem with estimates is the assumption that all estimates are equal. We all know they're not, unless it's an estimate for something where the developer has solved a similar problem several times before.
Otherwise, you might have only a general feel for the magnitude of the problem at first: this is a 6-12 month kind of project. Perhaps a few days of prototyping and proof-of-concept tests will narrow that down a bit, though: we'll do the work in three stages, likely to take 2-3 months, 2-3 months and 3-4 months respectively, putting us in a 7-10 month window. As each stage develops, the reliability of any estimate for remaining time tends to increase, and perhaps a clearer idea emerges for the later stages as well.
In short, an estimate isn't worth much without some indication of confidence attached to it.
FWIW, my personal preference is to aim for about four qualitative levels: very general magnitude from initial overview of the project, more systematic estimate after clarifying detailed requirements and probably a bit of prototyping or proof-of-concept work, definite target date unless unexpected major disruption or change of requirements happens, and very confident once the project is into the final stages and no major hurdles remain.
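One way to make the "confidence attached" idea concrete (the class, stage names and numbers below are a sketch of mine that loosely mirrors those four levels, not any standard tool) is to refuse to pass around a bare number - only a range plus its confidence stage:

    # Python sketch: an estimate is a range plus an explicit confidence stage.
    from dataclasses import dataclass

    @dataclass
    class Estimate:
        low_weeks: int
        high_weeks: int
        stage: str  # "magnitude", "systematic", "target date", "final stretch"

        def report(self) -> str:
            return f"{self.low_weeks}-{self.high_weeks} weeks ({self.stage})"

    initial = Estimate(26, 52, "magnitude")        # "a 6-12 month kind of project"
    after_poc = Estimate(30, 43, "systematic")     # after prototyping narrows it down

    print(initial.report())
    print(after_poc.report())

Anyone reading the plan then knows not to treat a "magnitude" number as a commitment.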
I actually think that the same principles apply to _any_ form of creative activity.
Let's say I'm going to write a 300 page book. I can take my typical wpm speed, do some math on it, and say that it will take me X hours to do this. A book, on the surface, should fit into the "size and speed" heuristic. But of course, this is nowhere near the case. No decent piece of writing ever comes out without many many iterations of rewrites.
Writing a piece of music? Same thing. Making a clay pot? Even great masters take many tries before coming up with something worthy of the public eye.
Software, even to my novice eye, is similar. But the difference is that the person with zero experience doesn't know instinctively that software also falls into this iterative, creative process. We've been trained and educated to know that writing, composing, and sculpting are iterative creative processes that don't have a linear measurement scheme. We spent hours upon hours as children and young adults writing and rewriting, drawing and redrawing. We know from personal experience that things hardly go "as planned".
But for something completely foreign to us, such prior knowledge is absent. We have no idea. So we take what we can see, and extrapolate. "Hey, I see just one web page, that can't be so hard... (insert mysterious thought process) how about XYZ days?"
The fact that I took a few years of programming courses in college (I was an EE Major) has probably helped me in keeping things sane. I've written enough code myself to know instinctively that both development and debugging are far from linear processes. I know that I have no way of estimating how much time it will take devs who are much more capable than I am to develop/fix something. So I cede to the guys who know best. I take their word and estimates in good faith, and focus on swatting the 'flies' away so the team can focus on the task. What if we don't hit our estimates? Well too bad, but stuff happens. Time to reassess, buckle down, and go again.
I agree completely but I think there are two things that make software slightly unique from any piece of art:
1. We have a cultural context that tells us how hard it is to produce something like a song, or a painting. There's a reverence for it. And so people tend to back off of their estimations for the amount of time required to produce it.
2. Software isn't physical. You can hold a big book in your hand say CRAP! that must have taken a long time to do. But you're just looking at the result of a program, if it's done well, it's a tiny streamlined user interface. The result of one button click could have taken thousands of programming hours and thousands of lines of code but you would never know it. Software obfuscates complexity from the user in a way that someone looking at a huge huge painting or a big book doesn't run in to as much.
So if we're talking about the efficiency of natural heuristics in estimating things, I think software is still slightly more difficult to estimate than art. But I agree with your overall point.
1. Definitely agree. But within hacker circles, I am sure there would be similar reverence for a beautifully written algorithm. So I'd venture to say that the issue isn't something that's inherent to software, but rather relative to our cultural/societal upbringing and education. There's the possibility that software may be understood by the larger population in the (hopefully not so) distant future.
(I'm probably aided by the fact that many of my good friends are coders, so even if I myself am not capable of writing such great pieces of code, I am constantly developing the crucial context and understanding for the creation process, and what kind of people/work goes into building something great)
Also, I have another suspicion, which is that the majority of the work performed by the people who make such requests is, in fact, linear. What kind of work do the ground-level sales/marketing/PM types do on a daily basis? Whether it be crunching numbers or writing documents, making phone calls or preparing for meetings, most tasks are measurable and predictable. During my poker playing days, I remember the enlightening phrase, "We expect the other player to act the way we do when placed in a given situation (i.e. against certain bet patterns/boards)." Similarly, if our daily work is linear and calculable, then it is in our nature to expect the other kinds of work, which we cannot completely grasp, to be linear and calculable as well.
2. A better analogy for writing would then be a poem, perhaps even a Haiku (which only contains 5+7+5 = 17 syllables). Writing 17 syllables is something we can do in under 30 seconds; writing a masterful work of poetry is something that could take years. (but I completely agree with what you're saying :))
Regarding point 1, there are people who trade in songs and music, and I can assure you they have less reverence for music than the average person.
In many ways, it's generally the consumer or purchaser of something that undervalues it. You have two camps of people: those who commission a work out of reverence to the medium, or to its author/artist, and those who simply trade in the resultant good.
A song might seem like something an artist painstakingly crafts over the course of their life to someone in the first camp, or as something that can be banged out in an afternoon to someone in the second, like an ad exec, or a pitch man.
As developers, we often deal with our own consumers in the form of project managers or product owners who expect a ship date, and need for the result to be fast, but we aren't exposed to consumers of other art because we (generally speaking) aren't artists who trade in that business.
And one of the things that throws some software estimates out the window is when a developer/programmer gets in the zone. There have been times I've hammered out a problem that I had spent days on in a few hours when the inspiration/flow/etc. hit. Something I would have guessed would have taken me a few days more, I end up hammering out in 3 hours. And when you pull these kinds of tricks...
People think you can do it again... and again... and again as if it was something you could just conjure out of thin air. There are many papers written on this topic. I've been playing around with a few ways to achieve this more frequently, and it has worked to some degree, but it's not something I could rely on in the heat of battle.
We should always deliver a physical copy, on paper, of everything written during a project together with the invoice. Every whiteboard brainstorm, every source code diff and every email sent and received. Should make a decent stack of paper to give the customer a sense of perspective!
This reminds me of what Zed Shaw said in The ACL is Dead, when he compared programming to publishing magazines. Lots of people collaborating on creative tasks aiming for a deadline.
How do magazines manage to work deadline-driven and still produce good creative work?
I'm a music producer, and this is a constant problem for me that after many years still boggles my mind.
The root of the problem is that producing a modern style recording of music is resource expensive. Far more than anybody would like it to be.
The resources can be time/money/talent. So there is an intense bias on everyone involved in the project to under-estimate.
I usually work on low budget projects. You can actually accomplish a lot with a low budget. But as you start to climb the "quality" scale the costs increase much more quickly than the quality. To get a product that you might consider subjectively twice as good might cost ten times as much.
At core, I feel like there is a fundamental principle of information organization. It takes a certain amount of resources to undo entropy and organize information. When we experience a piece of work we can intuitively feel that entropy has been significantly reduced, and we appreciate the accomplishment. (Even if we are not in the field of endeavor we still feel it, though we don't know how it is done.)
One specific point of difficulty in estimating time in music production is that there is an x-factor. It derives from the need to create something exceptional, which has both subjective and objective elements.
With an experienced production team you can predict how long it will take to produce something competent. But usually there is a point where we have all done the things we know how to do: the song is arranged, the players are good, the production environment is good, the instruments are good. But the song is missing something. What is it? It is not known yet. So an iterative trial and error process begins. Let's try a different singer. Maybe we should borrow my friends guitar. Let's keep recording. Let's rewrite the song.
Hopefully a solution is arrived at, but since it is a trial and error process it is impossible to estimate how long it will take.
One of my favorite ones was a fellow who wanted an iPhone and iPad game done. He proclaimed that it had to be done for $10K. This game easily required six months of work. It also required server-side development and support. He also had no idea that every so often you need to fix, I mean, update, your apps because new iOS releases might break them.
I was open-minded and decided to invest some time educating him on the process and the needs of the platform. Once he had enough information he networked and hired someone out of India to build him the game. I was not happy about that at all. Then I learned that he was having all kinds of problems with the process (and the app) being a total mess. He's learning his lesson.
Come on now. Don't blame the programmer being Indian for the problems your friend is having. Good programmers anywhere cost almost as much as they do in the US now. Globalization means that an Indian programmer has the same access to salary information in the west. He/she might knock off 10% to be competitive.
Your friend could have easily got a cheap, inexperienced programmer anywhere else in the world, including the US, and still had the same problems. He had a budget, you knew it wasn't enough for what he needed, and he came up snake eyes.
Who says I am blaming the programmer for being Indian? I'm not. Excellent work being done out of India. Here's the problem, lots of people like this fellow look at India and China to find the lowest possible bidder out of complete ignorance. And, when they find the lowest possible bidder they get exactly that: crappy inexperienced programmers.
If they searched for the lowest possible bidder in the US or Europe they'd get exactly the same thing.
I did not intend to imply that Indian programmers are not good. An ignorant fool looking for rock bottom prices will, more than likely, find them in India rather than the US or Europe.
If an unsophisticated client looks to another hemisphere to get a one-off iOS app built I wouldn't bet "world-class talent at a 10% discount" was their hiring strategy. More likely "I can get this done for 1/3 of what this guy is quoting me? What could possibly go wrong?"
Another thing which I think kills accuracy in estimating development time is the ratio of time needed for "figuring out what to do" and "actually doing it".
If you're learning a song, the first stage is very quick. You think about the problem for a moment and you decide on a plan like "Read the sheet music and try to play the song. Focus on areas you have trouble with. Repeat". The bulk of the time taken is spent "actually doing it", not "figuring out what to do".
The thing about the "figuring out what to do" stage is that often in order to estimate the time needed for that stage, you basically have to have already completed it. The answer could come to you out of the blue, it could take weeks of thinking into dead ends. And with software, that first stage can be the bulk of the time. Software is a more like writing a novel than learning a song. The thing that takes longest isn't the actual writing, it's the figuring out what to write. If someone asked you how long it would take you to write a novel that gives insight into the human condition by following the story of a family caught up in a bloody civil war, what time estimate would you give them? I expect your typing WPM wouldn't be a big part of the calculation.
You only know how long it's going to take once you know how you're going to do it, and once you know how you're going to do it, you've already done most of the hard work.
As a programmer I know I can use my prior experience building similar things to estimate how long each feature will take to implement.
Here's where this breaks down for me: it's never similar. Sure, the functionality from the user perspective might be similar, but with the pace of change in development tools, platforms, and frameworks, it's never the same. Seems that every project involves some major piece of kit that I've never used before. In the last five years I've used ASP, ASP.NET, C#, PHP, Lua, Python, Erlang, Oracle and PL/SQL, MySQL, PostgreSQL, SQL Server and TSQL, OpenLDAP, XML, XSLT, JSON, javascript, jQuery, Scriptaculous, and other things built out of those, and probably a dozen others I'm forgetting at the moment.
Yeah at some level databases are all basically the same and languages are all basically the same but at another level they are not and that's where the time sinks are.
The more I think about it, the more I like this metaphor. Because you CAN make SOME estimates based on what's in the closet. If it's a huge closet, you could guess that the house was fairly large. If there are a lot of coats of different styles, you might infer that many people live in this house. But you're still making a guess about something you can't actually see because you haven't been given enough information. Brilliant!
The problem is no one has locked you in the closet - just open the door and look around. Heck, you have a whole hour to look around the house and estimate how long it takes to build the thing.
You are limited by how long you can spend inspecting every nook and cranny of something, but you're only limited by time. The "door" metaphor is misleading. Nothing is standing in your way from exploring the space.
If you can find the "hard parts" of the house faster than someone else, you're likely to be a better estimator. In fact, my favorite trick is to focus on one hard part and extrapolate up. For example, write down how tricky it is to build the kitchen, or heck just a cabinet in the kitchen, so everyone gets it when you say "oh, by the way, there are 10 more rooms."
At Bigco, you can't start opening doors until you give them your estimate and get it approved. There's no way to know how many more doors in how many more rooms there will be until you've been in all the rooms and opened all of the doors.
It's fractal. That "last" door could lead to 100 more. Oh, and it's almost always a horror flick with axe-wielding maniacs (1) and booby traps (2) in each room, intent on your destruction.
(1) The boss's boss's nephew who's a "computer whiz".
(2) $300,000 "developer tools" you have to use by fiat policy because, dammit, we spent a lot of money on them.
Knock on the walls, scream and see if anyone hears you, listen for sounds through the walls, if you do this for enough houses before you got a chance to open the door and walk around you might be able to make order of magnitude estimates. Always assume the house is bigger than you think.
"[Jobs] pushed Steve Wozniak to create a game called Breakout. Woz said it would take months, but Jobs stared at him and insisted he could do it in four days. Woz knew that was impossible, but he ended up doing it."
The infuriating thing for me is that people don't get that these monumentally fast development achievements still involved a massive expenditure of personal effort. Thus the "It'll take a few days of coding" customers of the gp are only more annoying if they cite examples like yours, expecting a complete product in a few days.
Not only that, but they require all the skill and experience of truly world-class experts in their fields.
Yes, Linus can throw together an amazing dvcs that scratches his particular current itch in a weekend. The just-graduated CS major you're pitching your idea to? The one with three whole PHP and/or Ruby websites under his belt? He'll be able to do that too - after spending 20 years running the world's most popular open source project and knowing exactly what he needs his version control system to do. Give him a weekend to do it right now? You'll get _something_, but it sure as hell won't be git.
Same with Woz - there was probably nobody else on the planet who could have written Breakout for the Apple II in under a month. I don't know whether Jobs was brilliant enough to know that what he was asking was possible for Woz, even when Woz told him it wasn't, or whether Jobs just strongarmed Woz with unrealistic expectations and took credit for the success (and laid blame for any failure); but if he'd asked anybody else, they would have failed.
Whatever your idea is, chances are _very_ high that if you think it could be coded "in a few days", you don't actually understand the problem (and its solution) yourself.
I can/have built some amazing stuff over weekends. But generally the requirements for that are, in no specific order:
1. I must be in a flow.
2. I am using a technology I'm very familiar with, so I know the tools well.
3. I am doing something new, not maintaining someone else's code or interacting with an external team.
4. No disturbances or other office distractions like office emails.
Building new things is different from maintaining. Many times I'm waiting on some other team's inputs to proceed, or I'm required to fix a list of X bugs which can't be fixed until I understand the underlying code base well.
"Create this webpage in 15 mins" kind of arguments don't work because you are adding something into a working system, and you generally can't do that until you understand everything about the system.
Doing something new is different. You definitely can get to a first prototype quickly if you know what you are doing, know what you are using to do it, and have no distractions.
Look mate, by no means am I downplaying anybody's accomplishments, nor am I trying to say that anything can be built quickly.
Rome was not built in a day. Neither was Git as we know it now. But given a narrow set of requirements you can get a prototype out pretty quickly, especially today when so many reusable pieces are available. Linus coded up a distributed VCS (not with all the features we know today) sufficient to host itself in a weekend.
It also depends on the individual. Some people can sit for long hours at a stretch and focus on a problem till something worthwhile comes out. Others can only do a little per day.
In this case it looks like a rewrite, and rewrites are by no means doing something new. By now he would have a clear idea of what he needs to do in its entirety, so he would be writing to that design one piece at a time. That is totally different from writing a self-hosted VCS from scratch in a weekend and then iterating and adding features over years.
Reminds me of "open textured" in legal philosophy: the idea of precedent in law is you can use it to predict how a court will decide in future. This mostly works, but if you have a bunch of cases (data points in multidimensional space), simple interpolation isn't always accurate. You need to look at the specifics of the present case (there's still uncertainty because each judge has an individual perspective, but that's a different issue).
It's a little bit like a fractal: just knowing some data points won't always predict - even roughly - what happens in between (but in flatter regions, it can).
And Turing showed that (in general), you can't predict the result of a computation except by actually doing it.
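To make that Turing point tangible (this little example is mine, not the commenter's): there are tiny programs whose running time nobody can bound in advance, so in general the only way to find out is to run them.

    # Python sketch: the Collatz iteration. Whether this loop terminates for
    # every starting n is a famous open problem, so "how long will it run?"
    # cannot, in general, be answered without actually running it.
    def collatz_steps(n: int) -> int:
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 steps - far more than the code's size suggests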
The experience the article mentions only helps when a specific domain reoccurs, and known tools have been developed for it, with known characteristics. Now, if you know that domain and the tools, you can do prediction pretty well. I think of it like an experienced tradesman.
However, when you go into a new area (and a new one is created every decade or so in programming), you're back to zero in many respects. e.g. the existence of the right library can save 10,000 hours.
That makes me wonder what other disciplines there are that have the unpredictability and open-endedness of software. I read that on large construction projects, a 1% overrun is considered failure. That is clearly a different universe. Who else is in our universe?
I think it includes anything involving new knowledge: e.g. law, science/mathematics, invention. Maybe also in the academic Arts (I don't know enough to be sure), and also in the creative arts - painting, sculpture, writing (there's exploration and massive overruns there). Also, exploration in prospecting, oil discovery - though that's not really open-ended.
OK, I think it's being able to open up/zoom into finer detail (like knowledge, recursive structures, fractal) + discovery.
Mathematicians are. They couldn't tell if Fermat's last theorem was solvable in a day, or a year... or not by the efforts of many of the world's greatest minds for centuries.
Agreed. Math is art with proofs. Or formalized imagination. Or something. It would be significant if that's the closest thing to what we do in software. (Obviously, math and computing are close as theoretical disciplines, if not overlapping totally, but that's not the same question.) Is there any kind of construction or engineering that comes closer? Because those are the analogues people have mostly (and largely unhelpfully) turned to when trying to understand how software projects work. People say "engineer" when they mean "programmer"; they never say "mathematician".
the existence of the right library can save 10,000 hours
Can anyone define for me the difference between a library and a language? If I take all my functions and put them in a "library" and then just call them, have I made my program shorter? What if I define them all as built-in to the language?
Intuitively, those seem like tricks and not genuine simplifications. Why don't they count?
Perhaps, moving into a library means it must be a clean abstraction and reusable - there's some legitimacy to considering it no longer "part of your program", so your program's shorter. If it is in fact used by other programs, by distributing its length over them all, your first program literally is "shorter" even including the library cost.
The library/language distinction seems less important these days (e.g. standard libraries are often thought of as part of the language). There is (usually) a syntactic distinction, of whether it's a keyword (global availability and tightly limited), or namespaced in some fashion (local availability, and limitless/open-ended).
ans: 1. syntactic/expandability, 2. yes, 3. yes, 4. they count in the wider system (as OP, I just meant you don't have to write it)
Your suggestion that a library's complexity/cost be amortized over the programs that use it is an interesting one.
What I really want to know is when a program counts as independent of another program, so its length (and thus complexity) can be measured separately. Terms like "clean abstraction" don't clarify this for me.
Is it as simple as saying: if my program calls your program and your program knows nothing about mine, then yours is my "library" and I need not add its size to my total?
If I make a system and call parts of it a "library", is it less complex than if I call everything the same program?
Sounds like mathematicians defining their assumptions - it's up to them where they draw the line as to what they will work on themselves, and what they'll accept as a given. i.e. it's arbitrary.
Taking it back to Turing machines: you can simulate any specific design/implementation of a Turing machine on some other Turing machine, given a program of constant size (an emulator for the first machine). This can be applied to Turing machines which include libraries of arbitrary size. So again, it's up to you where you draw the line.
I wonder if the confusion is to do with why you want to draw the line?
I doubt these answers will satisfy, because I'm not clear on your question. I find such questions easier to answer if I get clear on the purpose - why exactly do I want to know how complex a program is? If it's purely out of curiosity, what is the nature of that curiosity? Complexity, from what point of view? how am I thinking about it? What am I worried/concerned about?
BTW: I was going to use "amortized", but it seems specific to over time (an online dictionary supports this). But it does feel correct - is it a legitimate generalization?
EDIT I really think my first answer was best, so I'll flesh it out a little. I understand your question as being: when we measure the length of a program, we can factor some of it out into a library routine to make it shorter, but isn't this shifting things around just cheating? If it simply was cheating, then I don't think you'd have any question about it. I think the puzzle is that it does seem legitimate in some way, yet also seems like cheating.
My answer is that instead of measuring the length of your program, you measure the aggregate length of all the programs on your computer. From the perspective of this larger system, if that part that was factored into a library was also used by other programs, then it would make the total length shorter. The "clean abstraction" isn't an important consideration in itself, but comes into it only because that's the (a?) way to make it reusable; and being reused is crucial for factoring a component out into a library, to decrease the total length of all programs.
[ One could generalize beyond "all the programs on your computer" to all possible programs, weighted by their probability. The probability captures the usefulness of the programs (there are many more nonsense programs than useful ones in the set of all possible programs). This is important, because a uniform probability density wouldn't enable factoring out to make the total shorter - you need some regularity/redundancy for that, which requires some unevenness. Typically, programs do have commonalities because the problems they solve have commonalities. ]
Basically, taking the widest view, would factoring this aspect out make the total system shorter?
I think this explains the sense that factoring out some code into a library does make it shorter, even though when looking at that program in isolation it's just moving things around.
Here's why I'm interested. The most important thing in making good software is to eschew unnecessary complexity. A program's size is the best measure of its complexity. Therefore we should try to write shorter programs. Much, much shorter programs. (See Alan Kay's VPRI project etc. etc.)
It's easy enough to measure the size of a program: LOC or something like it. But how do we define what a program is? Surprisingly, that's not obvious. As you say, how one carves it up seems arbitrary. The conclusion you came to is also one I've come to: you end up having to measure the total size of all the code that's running. But that seems absurd. (Unless you're a Forth person.) It means I can't measure my program's size without measuring, say, Linux's.
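As a toy illustration of the measurement problem (the directory names here are hypothetical), the same codebase gives wildly different numbers depending on where you draw the "my program" line:

    # Python sketch: count non-blank lines of Python source under a directory.
    # LOC is a crude proxy for complexity, but it makes the boundary problem visible.
    from pathlib import Path

    def loc(root: str) -> int:
        return sum(
            1
            for f in Path(root).rglob("*.py")
            for line in f.read_text(errors="ignore").splitlines()
            if line.strip()
        )

    app_only = loc("myapp/")                  # just the code we wrote
    with_deps = app_only + loc("venv/lib/")   # plus every library we depend on
    print(app_only, with_deps)                # the second number dwarfs the first

The absurdity is that only the boundary changed, not a line of code.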
So to stave off the apparently absurd, we need a simple way to distinguish programs or systems from one another. That's my question.
If A and B both call each other, surely they are part of the same system. But what if A calls B, but B doesn't know about A? Does that make them separate? Sometimes one would say yes, for example if B is Linux. Can we always say yes? That would be convenient. But there are lots of such As and Bs in any complex program. That doesn't mean there are lots of programs.
I wonder if like so many other things in programming, this is in the end a social question and not a technical one. That's the trouble with referring to "reuse": it depends on how many people know about B and like it well enough to use it. It seems weird to say that A (or the system it belongs to) gets less complex depending on how many other programs use B. That would mean that a system's codebase could get more or less complex without a line of its code actually changing. Again - not an impossible view, but seems absurd.
I'm looking for a principle here because this way of thinking about complexity is a big part of how I work and because I want to have an entire company that thinks this way.
I think my bracketed aside covers this: by using a probability density, it's not affected by how many people are actually using it, or whether that changes. Of course, it's only a conceptual help, since you can't actually know this density: it amounts to knowing all possible tasks now and in future (a practical impossibility) and also knowing all the solutions (a theoretical impossibility). It wraps all that up in two words.
I think defining whether A and B are part of the same program is a red herring. If you think of their calls to each other taking place through an interface, they don't need to know about each other - they just expect something that fulfills the contract of the interface. Again, it's like a mathematician arbitrarily making assumptions: we'll take a component that does this as a given. So, A and B aren't part of the same program. They aren't related at all (except that each happens to fulfill some interfaces needed by the other). This perspective is similar to taking probability density, in that it replaces a concrete actual program with an abstract representation.
But how then does one measure complexity? My view is you can't, in practice. But we can have a concept that can guide us. Of course, when staying entirely within a single program it's clear that we should factor out commonalities to reduce length. But what about factoring those things out into libraries? I think an approach that works is to imagine being in the business of selling those libraries - not with the perspective of making money for yourself, but of improving the world. [Again, this is an abstract concept, so we don't actually need to sell them and can ignore the real-world issue of marketing.] So there are questions like: is there a need for this in many programs? Can it be reused generally, or is it specific to program A (shades of Brooks' program-vs-product here)? Is it a clean abstraction, i.e. can it be used without knowing its details (this reduces complexity in the code that uses it)? How much effort/LoC does it save? A library that is broadly needed, is simple to use (doesn't require lots of code to use, and doesn't create more problems than it solves - the cure isn't worse than the disease), and saves a lot of code is worth factoring out.
Good libraries save the world!
I agree that in practice, it becomes a social (or community) issue, and whether it's adopted is important. You might have a great product that no one ever uses; or a crappy one that everyone does. It's messy, in that it's affected by non-programming issues like the particular alternatives existing at this moment, the need for it at this moment. [But again, a probability density circumvents all this]
Maybe it's a bit like whether the scientific method requires the communication of the results: you haven't done "science" until you tell someone. It also seems absurd - how can communication affect what you've already done? - but if you begin with science being a community effort (like language, commerce and law), then it does make sense: you can't do science alone. And our established science was a community effort (notwithstanding that we wouldn't know of the other stuff). This is contentious - I mention it because it seems like a similar contention.
Finally, some caveats. Though I think complexity is central (Occam's razor) and even did a masters around it, it's not everything. For example, in English communication, redundancy helps by giving listeners a way to verify understanding, different perspectives in case they didn't grasp it or the message was ambiguous and the speaker didn't realize it, and raw repetition in case they weren't listening or the message was corrupted/obscured. Some of these were significant in Shannon's original work on information theory. In communication in general, a constant background helps convey a changing foreground - as simple as the lines on a page of printed text, or looking at a file that is laid out neatly, so you can quickly orient yourself and also notice irregularities.
In code, some repetitive concrete structure can be helpful to the humans reading it. Related to this is the concept of "accidental redundancy". This is when two pieces of code are identical, so you factor them out - but it turns out that they change on independent bases, and it was just a coincidence that they happened to appear identical syntactically. i.e. their meaning was different. You can't tell this just by looking at the code, nor by measuring the length. You have to understand the model beneath the code, and even more importantly, understand the problem being solved (though there might be no way to know the two things differ, until they do differ).
My over-idealization of short code eventually made me see the benefits of seemingly over-concrete representations. :-) But short code is still central for me, as it seems to be in scientific discovery. The simpler hypothesis/code is likely closer to the truth. And it's happened to me many times that simpler code generalizes in just the right way - before I realized that it would need to generalize that way. :-) Also, the Dirac/antimatter story is nice on this point: http://arachnoid.com/is_math_a_science/index.html (near the end)
You make some good points, but it seems to me that each time we come close to the practical questions, the conversation veers away from them again. The one practical detail I understand and agree with you about is that sometimes when one factors to remove duplication, the change makes things worse by blocking the evolution of one or more of the original passages of code. They were "false friends" as the French say, and the abstraction you thought you found was spurious. It takes a while to get a feel for when this is the case. That being said, it's not that hard and I don't think it has much bearing on the big question, which is: how can we minimize the overall complexity of a system? I believe this is (1) deeper, (2) harder, (3) more important, and (4) more doable than (as far as I can tell) you give it credit for.
defining whether A and B are part of the same program is a red herring
Do you accept that minimizing complexity is critical and that program length is the best indicator of complexity? That makes program length critical, and you can't measure that without knowing what counts as part of the program. What part of this argument is wrong?
I'm beginning to think it boils down to whatever a particular team or organization has to maintain. If you're Microsoft then you pay for the length of Windows, otherwise you don't have to count it. If this is true then it's entirely a social question, and what we should be considering is something like lines of code per maintainer.
The spurious abstraction issue was a caveat. The central problem is the hardest possible problem, IMHO; my suggestions are just what we can do about it.
how can we minimize the overall complexity of a system?
Nice clear statement. If we're minimizing complexity of an overall system that includes A and B, it doesn't matter whether A and B are distinct programs. OTOH, if we exclude one from the system, we can't measure complexity effectively - unless we're comparing code that uses the same environment e.g. A vs. A', and B is a library.
If MS provides an extensive framework that doesn't quite suit your project, you may distort your project (i.e. add unnecessary complexity) to take advantage of it.
I'm still not fully clear on your question/purpose (though minimize overall complexity seems close). You want something you can use as a guiding value/principle for your company; also something practical. "Do you accept that minimizing complexity is critical..." Critical, for what? [FWIW, I think it's critical for finding truth, generalizing well, and also for beauty/rightness/elegance]
I'm now going to have a beer after all this! :-) Cheers!
I meant critical for building great software systems sustainably over time. That's also my purpose, if you expand "system" to include the team that's building it.
I agree with Alan Kay that the software industry has become a reductio of wildly inflated complexity and that it's possible to do orders of magnitude better. Presumably, anyone who figures out how to do that will be able to do things other companies can't. That is how the world gets changed. Not by persuading others to do it but by just doing it.
For a long time, smart people thought it was a matter of writing in more powerful languages, but it seems clear that it isn't that alone. It's not just the medium, but how we're using it. The 64 billion dollar question is: how can we use it better? And how will we know?
My hypothesis (not that I invented it, but I subscribe to it) is that we will know when we are able to build valuable systems with drastically less code. Obviously that requires knowing how much code there actually is. I'm fine with LOC as a metric (some HNers convinced me of that) but you still have to know what code you're measuring and what (e.g. libraries, language implementations, OS) you're not.
Interestingly, Kay's VPRI project doesn't have this conundrum because they're building everything up from hardware.
Striking some chords, I've thought on these issues too.
software systems sustainably over time
Aside: the related idea of business sustainability is a business issue, not a technical issue (to do with customer needs and competitors both changing over time). Clayton Christensen sees high-tech as going through phases favouring performance, reliability, convenience then price over time - what wins one phase won't necessarily win the next.
Warren Buffett sticks with low-tech (like a brick maker) because slow change enables advantages to be more sustainable over time.
reductio of wildly inflated complexity and that it's possible to do orders of magnitude better
Yes, their performance sucks, and also distorts the right way to do things (like the MS framework example). But does it depend on which "better" we mean? If better means "time to market", then the present stack is better... I respect much of what Alan Kay has said and done but note he's never had business success.
Not by persuading others to do it but by just doing it.
Yes, I think so too. :-)
It's not just the medium, but how we're using it.
I think it's specific abstractions, that suit the problems people are facing - and I think we're making steady progress. From arithmetic, function invocation, namespacing, way up to standard libraries (consider in sequence those of: C, C++, Java, Python) and now web-based APIs. Like scientific progress, it's not one single theory, but a great many specific facts and theories. Each a mysterious journey and heroic victory in itself.
BTW: There's that idea of using the analogue behaviour of transistors directly, instead of going through circuits, microcode, compilers and programs to simulate them... amazing performance, but hard to manage.
build valuable systems with drastically less code
I think libraries. e.g. we can make a valuable website with very little code using Sinatra (or PHP). That means we're there already :-). I know you're concerned with how to measure the length, and whether it should include libraries, but I think that if the abstraction suits your problem, it does not introduce unnecessary complexity into your code - and so I'm happy to consider it part of the platform and ignore its actual cost. I'm not sure how to justify that, but it seems intuitively right to me (like not worrying how long the code is in the implementation of arithmetic - I understand the IEEE fp spec is huge). Perhaps because that's the kind of library I'd write if I did the whole thing from scratch.
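To illustrate the "very little code" point in Python rather than the Sinatra the commenter mentions (this tiny sketch is mine, and the route and message are arbitrary): the program you see is a handful of lines precisely because the length lives in the library underneath.

    # Python sketch: the Flask analogue of the Sinatra point above.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "A small but useful site - the complexity lives in the libraries."

    if __name__ == "__main__":
        app.run()

Whether you count Flask's own lines is exactly the boundary question from earlier in the thread.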
I think this is the nub of our impasse: I'm fixated on clean abstractions, and you're concerned about measuring program length. I think clean abstractions don't contaminate your code with complexity from another level - so you can pretend to start fresh in a new world. Ray Kurzweil suggests this happens with technological innovation, and even in biology e.g. tweaking DNA, cells, multicellular body plans, behaviour etc rather than inventing alternatives from scratch. Similar to SOTSOG.
A source of facts: ARM is a new stack compared to x86; and iOS is a new stack compared with desktop OS. Is there progress? They seem better (more efficient, smaller), but using the same fundamentals. Those Forth chips are totally different though.
BTW: do you have a link for the ideas about complexity from Kay at VPRI? I looked at their website, but couldn't see anything directly on it - and didn't want to download a bunch of pdfs to search for it. :-)
Is Pachelbel's Canon considered complicated as far as music performance goes? Reading that knocked me right out of the flow of this article. What's he thinking of? Maybe someone's complicated improvisations on top of the (really simple) canon?
On topic: the post didn't really cover where the layperson's base estimate comes from -- a layperson sees X pages, or X features, and multiplies that by... what?
The reference they use varies hugely, in my experience; many are trying to imagine what's involved in developing a "feature" -- which yes, will generally be a massive underestimate... not always, though.
There are also plenty of laypeople who imagine that everything they see is incredibly complex & difficult to create (I have a neighbor who imagined that writing software meant typing in the gunk he saw when accidentally opening a binary file in notepad), and all written from scratch -- so a simple blog would take ages to build, because it's obviously complicated stuff.
In real life, some quite complicated things come for free, other seemingly simple things take gobs of time, sometimes unexpectedly... it's a big mess. And just as some of the harshest wildcards (like nasty bugs in old browsers) are calming down, more form factors, interaction modes, etc. are popping up, so I'm not sure it's going to be a problem we can stop talking about anytime soon.
And wow, if you've ever been forced to play the part of anything other than violin -- I played the viola part innumerable times in HS orchestra, and the cello part was even worse -- it's almost a punishment.
Thanks for this, it's a great article. I have this problem with my boss occasionally, which is funny because he's a programmer. "Oh, you just have to do this and this, shouldn't take you too long."
You're right, there's always those bottlenecks that crop up that you didn't expect. It's impossible to truly know all the limitations of the software you're working with. I work with Drupal, and it has A LOT of those problems. Sometimes I'll think something will be easy to do, only to find out that the popular modules I thought would work great don't cater to the specific use-case I'm working on. Then it's back to schlepping.
As bad as it is for non-technical managers to estimate, I've found that in some cases, programmer managers can be worse, especially if you're new on the team. Why? They already know the code, so often their estimates state how long they would take to implement the feature. They either forget or don't realize that you have to spend additional time to learn how the code works and how to best integrate your change. They don't realize that sometimes it's harder to modify software than it is to build anew.
Very insightful, that's my case I believe. I had never used Drupal prior to starting at the place I am now, and part of every project so far has been finding out that Drupal does not, in fact, do what I want. Luckily my boss is a good guy, so he's understanding when I don't estimate correctly.
Really interesting point here. Although I'd like to point out that I don't think it's necessarily being technical vs non-technical that makes it hard to estimate. There are probably non-technical people out there who have managed a lot of software projects and have developed solid intuition for how long projects will take.
The real divider is experienced vs. inexperienced. But that's just a guess - I've never been managed by someone non-technical.
Want to add two other reasons we're horrible at making estimates.
1. Expectations.
2. No widely accepted language for writing requirements.
Re: 1. Expectations are just hard to manage. Even when you do get "So the site's pretty simple" - the set of features in their brain is totally different from the set of features in your brain. Alas, it'll take more than "all it needs to do is X, Y, Z" to fix this process.
Re: 2. Is there a professional software consulting company that has solved this problem? I say there isn't. I also say that with the advent of Agile/Lean Startup methodologies, it'll probably never happen - "No, we don't write reqs, we have stories you need to approve." Uh huh, okay. For writing requirements, I got really damn fond of Cucumber, but then Lean UX took off, and that shifted everybody's priorities.
Bottom line: until there's a widely accepted way to communicate design and functionality, making bad estimates will always be easy to do.
I think it's much more domain-expert vs non-domain-expert rather than programmer vs non-programmer.
I've regularly seen very experienced developers make exactly the same sort of mistakes when talking to people outside of their field (Sales, Design, etc.) because they miss things like:
* Sales is about building stories and relationships - it's not just listing features and meeting people.
* Design is about understanding the user, iterating, testing and experimentation - it's not just making things pretty.
I wish technical folk were naturally better at this sort of thing - but they (and I :-) keep making similar sorts of mistakes.
(It's also amusing to see the "it's not like building a bridge" analogy turn up. Go talk to some engineers, architects and builders - and find out how many of those projects ended up matching the original time and cost estimates :-)
> (It's also amusing to see the "it's not like building a bridge" analogy turn up. Go talk to some engineers, architects and builders - and find out how many of those projects ended up matching the original time and cost estimates :-)
That's an important point.
I used to work in sub-contract electronic engineering. We had a lot of information about how long it took to do things, and how much items cost. In theory, estimating the cost of projects should have been straightforward. And then you add 15% for wiggle room.
And still it's hard to keep things going out on time and on budget.
I very often hear this comment from my fellow developer colleagues.
In our startup, I am responsible for the frontend coding. Very frequently, I get requests from other developers to implement this or that, and very often those requests are annotated with a "c'mon, it is just CSS, it'll only take you 5 minutes" when they have absolutely no idea what they are talking about, because they have never done real frontend development. They simply think that everything boils down to HTML and that frontend is NOT really programming. I end up attempting to explain that a frontend also has logic, just like a backend, and that no, plotting graphs when you don't even have the data modeled is not just a two-liner with Google Visualization.
I am always offended by people estimating time for work they have never done, so I find myself very rarely giving estimates. When I have a very precise idea of how to implement something the moment I'm told about it, I'll say "yeah, this is definitely doable in a short time span" - meaning things that take no more than half a day to implement, because you've already compiled the code in your brain.
Now, when a colleague of mine estimates stuff for me, I tell them "No, I don't think this is going to take X minutes to implement. If you think it can be implemented that fast, why don't you go ahead and do it?" When they take me at my word, and I see them still struggling after many hours on problems that are trivial to me, I go ahead and help them out. After a few times, they usually won't bother me with estimations anymore... NOT.
"But it brings up another more interesting question: why does the way we naturally measure complexity stop working when we apply it to programming?"
Ah and I had high hopes for that next paragraph,
There are two fundamental differences between, let's say, estimating how long it will take to make a sandwich and estimating how long it will take to create an online noticeboard for your village.
The first difference is that one task is manufacture and the other is design. Writing code IS design, and design is inherently much harder (or even logically impossible) to estimate, because implicitly you do not know what "done" means.
The second difference is that (broadly) how satisfactory a sandwich or other real world object is, is proportional to how long it takes to make. This is not true for software, because it is not a physical entity. It can be tripped up by a single incorrect character, and then, forget tasty - your software sandwich doesn't even exist.
A thought that occurred to me: another metric we use in the real world is what I'm calling "solidness" or "weight." I.e., if you see a house and a similarly sized tent, the apparent difference between the two is weight - we could probably pick up the tent and walk away with it.
In a sense, that's a useful way to describe software to non-technical users. We can throw something quick and dirty together, but it would be light as a feather, and carried away by any kind of load whatsoever. Or we can build something more solid to last longer, but it should be clear that that will take more effort.
Yeah, let's just go with the quick and easy version. It's really all we need anyway, I mean it's not like we're going to release it as a product to the public. We just need a demo for the boss right now.
<three months later>
Why is that thing not working? I thought we finished it three months ago!
Let's take it a step further, shall we? Say you get this type of request thrown at you, which is obviously a mis-estimate, and you don't end up delivering it at the expected pace. You weren't supposed to, by the way. It was a trap. Now the PM or whoever says, "Well, Joe, what if we punt it to Bob over there and see if we can get you concentrated on another problem X." Does that precipitate your balls into shrinking or what?
Estimates are just that - estimates, not contracts. We should expect them to have some float time; otherwise rename them contracts: I do X in Y amount of time if Z is true. Usually there are unknown variables, and I'm running out of letters to name them all.
Furthermore, I don't work on a deserted island. I have a phone, QA keeps on asking me questions, clients want clarifications and my version control is burning up from constant merging, and yes I have included all that in the time estimate.
Any project management software out there that takes an ML / probabilistic approach to this? One thing I found at one of the mega-corps I worked at is that both project managers and regular managers were blind to history. Note: this was before agile became accepted by mega-corps. (A rough sketch of what I mean follows the exchange below.)
Manager: How long is this going to take?
Me: Virtually impossible to predict accurately
Manager: I need a date
Me: 3 months
Manager: That's too long, we need to get it done in 1 month
Me: OK, 1 month
... 5 months and several iterations later ...
Manager: Project has been deprioritized, we have a new project, how long is this going to take?
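Back to the ML / probabilistic question: it doesn't have to be fancy to beat the exchange above. A rough Monte Carlo sketch in Python (all the overrun figures below are made-up illustrations, not real data) resamples a shop's own history of actual-vs-estimated ratios and reports a spread instead of a single date:

    # Sketch: turn a naive point estimate plus historical overrun ratios
    # into a distribution of likely completion times.
    # All figures are hypothetical illustrations, not real project data.
    import random

    # Ratios of (actual time / original estimate) for past projects
    # at the same shop - invented numbers.
    historical_overruns = [1.2, 1.8, 2.5, 1.1, 3.0, 1.6, 2.2]

    def simulate_completion(naive_estimate_months, trials=10_000):
        """Resample past overrun ratios to get a spread of outcomes."""
        outcomes = sorted(naive_estimate_months * random.choice(historical_overruns)
                          for _ in range(trials))
        p50 = outcomes[len(outcomes) // 2]
        p90 = outcomes[int(len(outcomes) * 0.9)]
        return p50, p90

    p50, p90 = simulate_completion(3)  # the "3 months" answer from above
    print(f"50% chance of finishing within {p50:.1f} months, "
          f"90% within {p90:.1f} months")

The only input beyond the naive guess is the team's own track record - which is precisely the history those managers were blind to.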
So reverse the question - "why does programming small things take so long?" (assuming x, y and z were relatively standard). Is there a niche in this somewhere - maybe some % of the guru class should be focusing on 'making the typical trivial'? I.e., we found 70% of websites include these features - download this template, push run, and you have a website running. Changing the text on the front page manually might actually be good enough for 20% of the population...
Because "x, y, and z" are "standard" only in the "the great thing about standards is; there's so many of them to choose from" sense of the word.
Recent example. Client: "We just want an online shop" - great, clickety click, Magento is installed, stick their logo in, tweak the stylesheet to match, call it "done".
Over the next few weeks/months:
"Where's the blog?"
"How do I enter auctions?"
"I want to display products with individual pricing, but sell in cases of a dozen."
"My SEO consultant says I need some 'landing pages' - where do I make them?"
"This doesn't sign people up to out MailChimp email lists!"
"It doesn't work right on my wifes iPhone."
Discuss needs with client, install various plugins and a WordPress blog, tweak the templates to make single-pricing/bulk-selling "work" for boxes of 12 (and kick myself knowing it'll bite me in the ass one day doing it this way…)
Then:
"How do I make the auction extend the time if someone bids in the last minute?"
"How do I enter products that come in boxes of 6 or 8 instead of 12?"
"It works on the iPhone now, but not on a friends 2 year old cheapo Android 1.8 phone!"
Then:
"How do I auto calculate freight costs based on both weight and cubic size, for orders with multiple products?"
(at which stage I start explaining combinatorial complexity and the knapsack problem to the client, and run a warm bath and remind myself "down, not across…")
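And in case the knapsack aside sounds like an excuse, here is roughly what "calculate freight from weight and cubic size for multi-product orders" turns into - a hedged Python sketch with invented carton limits and rates, brute-forcing every way to split an order into cartons:

    # Invented constraints and rates, purely for illustration.
    MAX_WEIGHT = 20.0        # kg per carton
    MAX_VOLUME = 0.05        # cubic metres per carton
    RATE_PER_CARTON = 12.0   # flat shipping rate per carton

    def partitions(items):
        """Yield every way of splitting `items` into non-empty groups."""
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for part in partitions(rest):
            for i in range(len(part)):          # add `first` to an existing carton
                yield part[:i] + [[first] + part[i]] + part[i + 1:]
            yield [[first]] + part              # or open a new carton for it

    def carton_ok(block):
        return (sum(w for w, v in block) <= MAX_WEIGHT
                and sum(v for w, v in block) <= MAX_VOLUME)

    def cheapest_freight(items):
        best = None
        for part in partitions(items):
            if all(carton_ok(block) for block in part):
                cost = RATE_PER_CARTON * len(part)
                if best is None or cost < best:
                    best = cost
        return best

    # (weight kg, volume m^3) per line item - a made-up order.
    order = [(9.0, 0.02), (7.5, 0.01), (6.0, 0.03), (4.0, 0.01), (11.0, 0.02)]
    print(cheapest_freight(order))  # the number of splits grows like the Bell numbers

Five line items are fine; fifty are not, and that's before real-world rate tables - which is where the warm bath comes in.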
99% of the time, what the client wants is "similar" to features already out there, not "exactly the same".
100% of the time they want something that isn't in the standard template and is a huge pain to work into the template you chose, while it may be a lot easier and faster to do it from scratch.
120% of the time, updating the template you chose will break your modifications, and you have to choose between running an insecure website or fixing your modifications (every time).
Templates and CMSes only work if you know what you are doing and understand that you are compromising your ideas to fit into the template/CMS you chose.
No shit. It's always like this. They have a wish list and a budget. These two things hardly ever match.
It's hard to give a quote when there are no detailed specs yet. Most clients still want a price tag. Of course afterwards they'll nail you on this premature conjecture.
As a business your quote needs to be competitive and still be profitable.
From experience, we usually throw things out and set boundaries in our offers / quotes. We've learnt the hard way that clients oftentimes implicitly assume certain features will be there without ever communicating them to us, so you need to think ahead and have it in black and white that if they want X, they have to get another offer / quote at a later stage.
So imo it's not just about what work a quote includes, but also what work it does not include.
Such a noob topic. If you don't know what you are doing, just admit it.
For seasoned programmers and PMs, this is a non-issue. Perhaps you are in the wrong profession if you can't handle the demands without whining like a brat.
Non-programmers, in my experience, tend to think software is "just that stuff you see on the screen". So if they create some screen mockups of the UI, they think the programmer pretty much needs to sit down at the computer and do some simple keyboard/mouse clicky-clicky to make the screen match the mockups, and then they're done. To them, it's just visual, and static. They don't understand there are processes, threads, hardware constraints (CPU, memory, disk, network bandwidth), libraries, languages, configuration, edge cases, error conditions, etc. I've seen this kind of perspective a lot with non-technical stakeholders, especially first-time clients of software contractors, or first-time managers of software developers.
"Just make it match these mockups!"
Dragging a little Facebook icon into place and poof all the code and edge case handling and database work needed to integrate with Facebook is all done, put to bed, etc. Just put that icon in the right place.