
To what extent should you program by "sketching"?

Software Engineering Asked by Adam Zerner on October 29, 2021

In Hackers and Painters, Paul Graham talks about how he "sketches" as he programs:

For example, I was taught in college that one ought to figure out a program completely on paper before even going near a computer. I found that I did not program this way. I found that I liked to program sitting in front of a computer, not a piece of paper. Worse still, instead of patiently writing out a complete program and assuring myself it was correct, I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. Debugging, I was taught, was a kind of final pass where you caught typos and oversights. The way I worked, it seemed like programming consisted of debugging.

For a long time I felt bad about this, just as I once felt bad that I didn’t hold my pencil the way they taught me to in elementary school. If I had only looked over at the other makers, the painters or the architects, I would have realized that there was a name for what I was doing: sketching. As far as I can tell, the way they taught me to program in college was all wrong. You should figure out programs as you’re writing them, just as writers and painters and architects do.

On the other hand, I’ve seen a lot of contrary advice saying that you should plan stuff out and think things through before you start coding, particularly in the context of algorithm questions for coding interviews.

9 Answers

The advice he got in college was probably about working with the outdated computing systems of the time, and not about how you should or shouldn't write programs. So he's essentially saying that a modern IDE on a fast machine is great. Which is true, but not all that interesting.

He would have started at Cornell in 1982(?), in Philosophy, but probably played with computers even then. Around that time, a typical CS program may have been using punched cards, or more likely a line editor on a time-shared machine. Cornell's site says their first CS machine was a PDP-11/60 in 1977. The only access would have been in a crowded computer lab. Instructors' advice would have been based on even older set-ups.

Inability to "sketch" using punched cards is obvious. Line editors of the time worked like the console does today, but worse: there was no line history, no autocomplete, no syntax highlighting, no mouse or cut/paste. Editing live was dreadfully slow. Sadly, it was much faster to hand-run code in your head, visually double-check the syntax, and fully arrange the code on paper before entering and running it. When the system hung or got sluggish, you had nothing better to do anyway. The advice about pre-planning was excellent practical advice, given that tech.

When he writes "I found that I liked to program sitting in front of a computer", maybe he had an Apple II at home, or maybe Harvard Master's students had a fast, dedicated minicomputer with vi, or maybe he's conflating his undergrad years with later ones. He may have thought his instructors were looking at a fast interactive environment and still saying "paper is better", but that's very unlikely.

Answered by Owen Reynolds on October 29, 2021

I clearly remember doing this when I started to program. While I was young and fresh and just getting to know the world of code, this was what I did most of the time.

But as time went on and I gathered experience, I started relying on this less and less, to the point where I don't do it at all anymore. The reason is that with practice I gained the ability to hold that "sketch" in my mind. Before I start coding a task, I already have the "skeleton" in my head, so I just write it down from one end to the other with all the "fleshy" bits in place from the beginning.

Occasionally this can even mean writing 1000 or more lines of code before running it for the first time. And - to my own surprise - it almost always works, with just a few typos and oversights to fix afterwards.

There is a drawback, however. Sometimes the task is so large and tangled that the skeleton does not easily fit in my brain. Then I can spend days trying to "crack" the problem before realizing that I'm stuck. What helps then is really getting the paper out (or, more often, a digital equivalent) and writing it down. What doesn't fit in my brain always fits on paper. After that it's the same - just write it out and fix typos and oversights.

So, my advice/experience is: do think it through; have a rough idea of what the code will look like and what the algorithm will do - but it's not always necessary to literally put it on paper and figure out every tiny detail. Just be confident that you'll be able to fill in the blanks when the time comes to write them.
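
To make this concrete, here is a minimal, hypothetical sketch of what writing a program "skeleton first" might look like in Python. The CSV-report task and all function names are invented purely for illustration; the point is that the overall shape is settled before the code is run for the first time, and the bodies are just the "fleshy" bits filled into that shape:

    # Hypothetical task: summarise expenses per category from a CSV file.
    # The skeleton is the three-function shape; the bodies are the flesh.
    import csv
    from collections import defaultdict

    def load_rows(path):
        """Read the CSV file and return a list of row dicts."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def summarize(rows):
        """Group rows by category and total their amounts."""
        totals = defaultdict(float)
        for row in rows:
            totals[row["category"]] += float(row["amount"])
        return dict(totals)

    def format_report(totals):
        """Render the totals as printable lines."""
        return "\n".join(f"{name}: {value:.2f}" for name, value in sorted(totals.items()))

    def main(path):
        print(format_report(summarize(load_rows(path))))

    if __name__ == "__main__":
        main("expenses.csv")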

Answered by Vilx- on October 29, 2021

This phenomenon is called survivorship bias, with the part about highly competitive careers being especially relevant. In particular, to correct Paul Graham on this:

As far as I can tell, the way they taught me to program in college was all wrong. You should figure out programs as you're writing them, just as writers and painters and architects do.

No, I would simply argue that the true reason he can "sketch" almost entirely as he programs (i.e. figure out programs as he writes them) is that he is Paul Graham. It's much the same way that you don't just start learning calculus at 12 and become Einstein. Apart from that, the other answers provide some excellent advice on the overall advantages of mixing the "plan-ahead" and "retrofit-as-you-go" methodologies.

There are "fail-safe" ways to increase productivity on average and this the actual purpose of formal education. Catering for talented individuals will always have a lower priority (though not too low, of course) than the general public, simply because there are too few. And talented individuals often think of themselves as nothing special. Well, usually, they're wrong. And, sometimes, their personal experience does not apply to everyone, not even the majority, especially when it contradicts what has been verified to work better on average. In a few words, you just cannot extrapolate from outliers.

When was the last time you tidied up your desk or room? If you are (or were) one of those people who "thrive in chaos", always programming amid a holy mess of things unimaginably irrelevant to a desk... would you suggest that all those teachings about keeping rooms and desks neat and tidy were wrong?

Answered by Vector Zita on October 29, 2021

Not all programs are the same.

Strategies that work for programs that search a problem space for one correct answer are different from enterprise software that's heavy with ever-changing business logic. "It depends" is never a satisfying answer, but the bottom line here is that there's no answer to your title question without looking at specifics.

Conceptual work for conceptual challenges.

Paul Graham's college professors were probably talking about programs that embody algorithms, with little to no emphasis on user interaction or external needs. Computer Science, distinct from Software Engineering, is arguably an applied form of mathematics. Just as you can think about sets and logic before picking up a pencil to write a proof, you can have an entire algorithm shored up before writing a line of code that reifies it. "Sketching" may help you to think through the ins and outs of the algorithm, but ultimately the program can't be valid until the full thing is understood.

Changing requirements demand flexible implementations.

On the other hand, software that caters to the unstable needs of a business or allows heavy user interaction (which will always have room for improvement) simply cannot be solved all at once. By the time you get done with it, it's already wrong! This makes the iterative approach a natural fit for the problem.

Answered by TheRubberDuck on October 29, 2021

Designing a program before you write it is not necessarily a bad thing.

Expression of design

The problem is that designs have to be expressed somehow, and programmers are generally not equipped with any better way of expressing a design than with the code itself. Code is directly executable by computer, and communicable to other programmers, because it is standardised and rigorous.

And, because program designers are invariably programmers, and because the facilities and functionality offered by a programming language often determine the general philosophy and idioms of a design, a program design is often tightly coupled to the programming language it will be implemented in, and the designer usually has to know that language at a minimum.

So it is rarely, if ever, the case that program designers are equipped with any better way to express their design than by coding it in the target language.

I think perhaps 30 years ago, this wasn't the case. Programmers were often still using older languages like Cobol, on much older development systems, where various pictorial representations on paper were thought to be more succinct and user-friendly than navigating thickets of code on text terminals. Better programming language design and better development environments have long since solved these problems.

Design experience

Another aspect to consider is that being proficient at design requires experience (or much more training than the norm), both in programming generally, and in the subject which is being computerised. An experienced coder dealing with a settled subject may be able to design a program on paper like baking a cake, according to established architectural themes or with minor variations.

But a student will rarely be able to do this - it's something he may be able to aspire to in his career. And even coders with several years experience may be tackling applications of such complexity, unfamiliar technologies and platforms, and unfamiliar subjects, that they cannot hope to conceive a full design up front, and a certain amount of trial-and-error may be called for.

Writing things down is often not just a record of thoughts, but a tool of thinking. With code, writing it down is also a way of subjecting it to computerised testing, and not just intellectual testing.

The source has flatteringly referred to this trial-and-error process as "sketching"; similarly, one could call it "sculpting", as if one starts with the clay and a clear overall vision, and progressively and elegantly brings out the detail. In reality, it's often more like learning to sculpt for the first time, with plenty of rework and wasted clay.

I remember watching something on telly a while ago where a clay sculptor neared completion of a human figure, with skin creases and hair detail already added, but found that it could not fully support its own weight - so he then had to hack out the entire backside of the body like a macabre autopsy, retrofit an iron skeleton from head to toe, then infill again with clay. This sort of radical rework happens virtually every time a programmer encounters a subject he isn't intimately familiar with.

In fact, neither sketching nor sculpting is an appropriate metaphor. The real complexity of a computer program arises from its dynamics and moving parts, or the length and complexity of its data processing pipelines, which neither sketches nor sculptures have. If we must use a metaphor, we should be honest and use one like combustion engine design, or some other enormously complex and subtle machine, more "mechanical prototyping" than sketching.

Excessive amounts of prototyping behaviours amongst programmers are not necessarily bad practices in themselves, but a symptom that they are routinely exceeding their own competence and understanding.

From the point of view of software management (or software educators), it's not just a case of telling programmers or programming teams to design up front. The industry itself must first invest in a proper science of software design, and then bequeath it to the software designers.

Answered by Steve on October 29, 2021

It's important to realize that this question of how to approach the software development process is not only about what software is being written but also about who is writing it and in what environment. I think the reason there are so many different opinions on how software development should be approached is that different methods work best for different people; different people find that different systems work better for them and the way they think. This may explain why schools focus so much on the planning-ahead aspect: that's how many professors like to approach problems, so they (understandably) think everyone should do it that way.

I happen to really like the "figure it out as you go" approach, but I know quite a few people who would be too afraid to start on a project if they don't have a clear plan going in. Making a clear plan is the thing that'd kill my motivation and ability to think outside the box. Sometimes I'll see the need for some planning, but I try to keep it to what feels necessary in my circumstances.

Overall, I like Paul Feyerabend's advice from Against Method on this matter (he's not specifically talking about software here, but rather about progress of any kind in any field):

The only principle that does not inhibit progress is: anything goes.

With that in mind, I'd recommend you try some of the contradictory advice you hear. Then think about what each method did for you, and I imagine you'll be able to take the things you like from each approach and find what is right for you. You may need to adapt when working in different environments, such as on a team (e.g. I've found planning is a lot more valuable when working on a team), but the key is to experiment and be willing to try advice. Know what works for you, but be willing to adapt to what you are working on and the environment you find yourself in.

Answered by addison on October 29, 2021

TL;DR

  • Both methodologies have real world applications.
  • Both methodologies can be overapplied and lead to inefficient results.
  • Paul Graham is either focusing solely on newbie programmers, overstating himself, or overapplying his methodology to the point where it becomes detrimental.

This is classic agile vs waterfall

Agile and Waterfall are two development ideologies that are mostly orthogonal to each other, but they are both valid ideologies in their own right.

One ought to figure out a program completely on paper before even going near a computer

This is waterfall to a tee. You do one thing until it is completely finished and should not be revisited ever again, and then you do the next thing.

Note of course that real-world waterfall still allows for error correction (no one is that perfect), but the point is that waterfall assumes that what you are building is exactly what you will end up needing.

Agile, however, is born out of the realization that when doing waterfall, your assumptions about what you will end up needing are often wrong enough that they cause more problems than they solve. For instance:

  • You may have overengineered something and wasted time doing so, while also making the rest of development harder because everything now has to work with an overly complex implementation.
  • You may have underengineered something, and because you assumed you were building the right thing, you coupled things too tightly and are now forced to make heavy breaking changes.
  • The customer has seen your demo and has tweaked the requirements (added/changed/removed some), which inherently means some of your assumptions about what you would end up needing are no longer correct, and all logic that depends on those assumptions needs to be revisited. If you already overengineered things, that becomes quite the time sink.

The main takeaway here, if you're working agile, is that it would've been better if you had not assumed anything that you didn't need to work out yet.

Fulfill the requirements you were given, but nothing more. Don't make a framework out of a small helper class. Don't implement the entire data structure from the get-go.

Agile expects you to revisit and rework/expand your code at a later time, trusting that your assumptions today are more error-prone than your better-informed assumptions tomorrow.
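
As a hypothetical illustration of "fulfill the requirements, but nothing more": suppose the only requirement today is to redact email addresses from a log line. The names and the regex below are made up for this sketch; the contrast is between a just-enough helper and a speculative framework nobody asked for yet:

    import re

    # Just enough: solves today's requirement and is trivial to rework later.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def redact_emails(line: str) -> str:
        """Replace email addresses in a log line with a placeholder."""
        return EMAIL.sub("[redacted]", line)

    # redact_emails("user bob@example.com logged in")
    # -> "user [redacted] logged in"

    # Overengineered for the same requirement: a speculative "redaction
    # framework" with rule registries and pipelines, built on assumptions
    # about tomorrow's needs that agile expects to be wrong anyway.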


Everything has a drawback

Both Agile and Waterfall have their uses; I'm not telling you one is better than the other. But too much of anything is not good, by the very definition of "too much". Agile and Waterfall have different "too much" scenarios.

If you over-apply waterfall, you can run into issues because:

  • You've needed to pre-emptively discuss so many things that the discussion becomes theoretical, abstract, and exponentially more complex with each passing day. As the complexity of the analysis grows, people struggle more and more to communicate clearly.
  • You're having to make so many decisions that you end up with a bout of analysis paralysis, where you feel unable to make a decision and therefore waste time deferring that decision to the future.

However, when you over-apply agile, you can run into issues because:

  • Developers are no longer making any reasonable long-term considerations, which can lead to clean coding practices slipping.
  • Developers no longer analyze their own tasks, resorting instead to shotgun debugging and brute-forcing their solutions.

Neither list is intended to be complete.

In either case, the end result is lowered efficiency. Both Agile and Waterfall have a (different) sweet spot for efficient development, and over/underapplication leads to missing the mark and thus being inefficient.


Paul Graham's approach

Now I want to bring Paul Graham's quotes back into the spotlight:

You should figure out programs as you're writing them, just as writers and painters and architects do.

He's not wrong here, if you follow the agile methodology.

If you've ever watched a Bob Ross video, you've seen the artistic equivalent of agile. He decides where the trees go after he paints the mountains. He doesn't know what the picture will look like before he starts painting, beyond a very vague "winter scene" or "seascape". Everything else gets filled in as he goes.

I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. [..] The way I worked, it seemed like programming consisted of debugging.

This is one bridge too far, in my opinion. Unless Paul is talking about his very early days as a newbie programmer (newbies generally output broken code on the first pass) or simply expressed himself too strongly here, this starts to sound like shotgun debugging, which would mean he took his "act before you think" approach too far and made it less efficient than it could have been.

Just to be clear: I'm not advocating zero tolerance for shotgun debugging. When all else fails, shotgun debugging will always be there as a last resort. But shotgun debugging is inefficient and slow, and you're often better off taking a step back and looking at what you actually want.

Sketching (in the literal artistic sense) isn't the final product, but it does imply that you are thinking about what you'll be doing. And sketching is not "gradually beating it into shape", as Paul describes his programming style.

The artistic equivalent of "beating it into shape" would be repeatedly drawing something badly, erasing (part of) it, and trying (that part) again. Paul seems to imply that sketching is "expected failure", which it really isn't.

Sketching is still a thoughtful process of reasonable approximation, but it avoids labeling itself as final and instead keeps itself open to alteration if needed.

Shotgun debugging is valuable for learners, as it teaches them the common mistakes that they should learn to avoid in the future, but that is precisely the point I'm trying to make.

A newbie artist doesn't sketch. They paint the whole picture, fail, and then paint over it. It is only when they start to gather enough experience to know how to (not) paint a picture that they start sketching specifically to avoid that try/retry process.

Sketching is what you do to avoid shotgun debugging. Shotgun debugging is not a form of sketching, it's what happens when you don't sketch.

Which brings me to my final point:

Debugging, I was taught, was a kind of final pass where you caught typos and oversights.

I said that sketching is a reasonable approximation which keeps itself open to alteration if needed. The kind of alterations you need to make to a sketch generally amount to the equivalent of "typos and oversights". If you need to redo your sketch from the ground up, then your sketch must have been really bad or misguided. That's just not good sketching.

While learners should shotgun debug to learn the source of their mistakes, any experienced developer, by their very nature of being "experienced", shouldn't be continually revisiting the basics during their debugging phase.

When you're no longer a newbie programmer, debugging is in fact "a final pass where you catch typos and oversights".

Answered by Flater on October 29, 2021

"Sketching" and "Upfront Design" don't necessarily contradict but complement each other. They are somewhat related to bottom-up and top-down approaches. It's necessary to have a big picture of what you're building, but the building process can be made more efficient if you work with building blocks that were developed bottom-up.

Just as a composer may start out with a motif and use that in a composition, or a painter does studies of body parts to later integrate into paintings, a developer should be able to pick from a repertoire of building blocks and construct new blocks for specific applications.

With test driven development, this is actually encouraged as a coding practice. In some sense, TDD is similar to the "beating code into shape" that Graham mentions.

Different languages support bottom-up development to different degrees. In general, languages such as Lisp, Smalltalk-80, and Python that provide an interactive environment encourage tinkering with code under development much more than statically compiled languages do, even though modern IDEs and fast incremental compilers blur this distinction somewhat. In most situations, you will still be able to create building blocks as libraries to be used in your final application, although you should expect to work on the library and the application in parallel, as the application's needs will determine what the library should provide.
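
As a rough sketch of that test-driven, bottom-up style (the slugify building block below is a made-up example, not something from this answer): you describe the small block you want in a test, watch it fail, beat the implementation into shape until it passes, and only then wire it into the application.

    import re
    import unittest

    def slugify(title: str) -> str:
        """A small reusable building block: turn a title into a URL slug."""
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
        return slug.strip("-")

    class SlugifyTest(unittest.TestCase):
        # The tests were written first; the implementation above was grown
        # until they passed.
        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Hackers and Painters"), "hackers-and-painters")

        def test_punctuation_is_dropped(self):
            self.assertEqual(slugify("Agile vs. Waterfall!"), "agile-vs-waterfall")

    if __name__ == "__main__":
        unittest.main()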

Answered by Hans-Martin Mosner on October 29, 2021

What they teach you in school is not all wrong; it is just one tried and tested way to teach you to think things over and not rush to the first solution you can think of.

The sketching process you describe, using your code editor, is basically the same as writing out Nassi-Shneiderman diagrams up front on paper. The point is to not skip the planning phase and to remain critical of your work as it emerges.

An indicator for me of whether you are doing it right would be when you try to run your code for the first time. I would be suspicious of your effectiveness if that were really soon, before there was any shape to your work. Debugging should be for catching glitches, not for shaping.

It is like composing. In music, the way to go is not to hit a random note and then try to find the next one that goes well with the first. You should have a structural idea of the piece before you pick up an instrument. Picking up an instrument is like hitting the run button. A good programmer typically does not do that before the structure of the program is complete. There may still be blanks to fill in, but the shape would be there.

Answered by Martin Maat on October 29, 2021
