I♥Sketch has been getting a lot of attention on design blogs this last week, which is hardly surprising: the immediate reaction from anyone whose work involves translating 2D sketches into 3D models is “I’ve got to try it!”. Unfortunately there’s no word yet on when it might be made publicly available, but the video below, and the paper (pdf) due to be presented at UIST08, give a good idea of how the system works. But whilst it’s designers who have got excited, my interest is also in whether I♥Sketch has implications for how non-designers might interface with CAD systems.
I♥Sketch © Department of Computer Science, University of Toronto
In their paper to UIST08, I♥Sketch’s developers make clear that the software is not intended as an aid for non-designers:
“Our chief design goal is to wean designers from physical pen and paper with a similar 2D sketching interface and subsequently ease them into a 3D environment where their 2D sketches transition into 3D models used in product design…It is important to note that ILoveSketch is intended for professional designers who are willing to invest some time learning a system in return for improved workflow later on, rather than for walk-up-and-use casual users.”
Nonetheless, it is obvious from the video that for a person with some sketching skill but no 3D CAD experience, I♥Sketch could present a significant reduction in the learning curve needed to get an idea into 3D and thus into rapid prototype or production. It doesn’t attempt to ‘help’ the user to design (more on that below), but with future (already hinted-at) developments, it could be an extremely powerful tool for non-designers to visualise the 3D reality of their 2D sketches.
I♥Sketch builds on a significant body of previous work investigating ways of translating 2D sketching into 3D curves and models. The UIST08 paper contains 44 references, and acknowledges the extent to which the system is built on earlier work. However, none of the existing approaches has had much success in gaining acceptance within the design profession. The reason for this, according to the authors, is that previous systems have focussed on “specific curve interaction techniques” rather than the designer’s workflow, and have failed to understand designers’ sketching processes and practices.
I♥Sketch has two distinct components: navigation, and curve sketching and translation. Navigation is carried out through a combination of pen gestures and button input (the left hand of the user in the video). Curve sketching is entirely pen-based. At times the two components interact as the software deliberately rotates the view to what it thinks is the best drawing angle, based on the last sketch stroke of the user. The overall feel of the system is based on a physical paper sketchbook, which provides a number of metaphors for the software, such as tearing off a page to discard it, and scribbling to erase a line. At times some of these metaphors seem a little forced, and I could imagine becoming tired of having to drag from one corner of the tablet to another (for example) to review sketches, rather than pressing a single key or clicking an arrow icon. But this is a minor point, and one which could probably be easily addressed in future.
Sketches are reviewed by dragging across the ‘sketchpad’ © Department of Computer Science, University of Toronto
It’s when you come to watch the way that I♥Sketch generates curves that you appreciate how the developers have attempted to understand the way that designers sketch, and how sketches are built up from ‘exploratory’ marks on the paper. I♥Sketch uses what it calls ‘multi-stroke curve sketching’, in which a final curve is built up as the product of a number of earlier curves. This simulates the designer making faint guide curves, gradually darkening them as the desired shape becomes apparent. The software uses an ‘ink drying’ metaphor – whilst the ink is wet the curve can still be modified, but once dry it has to be kept or erased (shown at 00:28 in the video). Single complex curves are created where the designer draws to connect two existing curves, and splines are automatically created if one curve is drawn tangent to another (00:44 – 00:54 in the video).
multi-stroke curve sketching © Department of Computer Science, University of Toronto
automatic spline creation © Department of Computer Science, University of Toronto
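The multi-stroke consolidation described above can be sketched in code. The following is my own toy illustration, not the algorithm from the paper: it treats each stroke as a polyline, resamples every stroke at matching arc-length parameters, and averages the results into one consolidated curve.

```python
# Toy illustration of multi-stroke curve consolidation (my own sketch,
# not ILoveSketch's actual algorithm): several rough strokes are merged
# into one curve by resampling each at common parameters and averaging.

def resample(stroke, n):
    """Resample a polyline of (x, y) points to n points evenly spaced by arc length."""
    lengths = [0.0]  # cumulative arc length at each vertex
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        lengths.append(lengths[-1] + ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    total = lengths[-1]
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        j = 0  # find the segment containing the target arc length
        while j < len(lengths) - 2 and lengths[j + 1] < target:
            j += 1
        seg = lengths[j + 1] - lengths[j]
        t = 0.0 if seg == 0 else (target - lengths[j]) / seg
        (x0, y0), (x1, y1) = stroke[j], stroke[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def consolidate(strokes, n=50):
    """Average several resampled strokes into a single curve."""
    resampled = [resample(s, n) for s in strokes]
    return [
        (sum(p[i][0] for p in resampled) / len(resampled),
         sum(p[i][1] for p in resampled) / len(resampled))
        for i in range(n)
    ]
```

The real system fits smooth curves and is far more sophisticated; the averaging above only captures the basic principle of several faint guide strokes converging on one final line.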
Understanding how curves drawn in 2D are represented in 3D space is probably the most complex part of the I♥Sketch system. There are five different ways in which this can happen. The first two are designated as two-curve methods:
- the designer draws two curves, one on either side of a given plane; the software assumes these to be symmetrical and creates two curves, each the ‘average’ of the two that are drawn
- the designer draws one curve from one view point, then redraws the same curve from a different viewpoint; the software then interpolates the two lines into a single 3D curve
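The second method can be made concrete with a drastically simplified toy version of my own, which assumes ideal orthographic front and top views and point-by-point correspondence between the two drawn curves — neither of which the real system requires.

```python
# Drastically simplified illustration of the second two-curve method
# (my own toy version): the same curve is drawn in two orthographic
# views, and corresponding points are combined into one 3D curve.
# Assumes a front view giving (x, y) and a top view giving (x, z).

def combine_views(front, top):
    """front: list of (x, y); top: list of (x, z) for the same curve.
    Returns 3D points, averaging the x coordinate seen in both views."""
    assert len(front) == len(top)
    return [((fx + tx) / 2, y, z) for (fx, y), (tx, z) in zip(front, top)]
```

In practice the two drawn curves will not agree exactly on the shared coordinate, which is why the sketch averages it; I♥Sketch works with arbitrary perspective views and handles the correspondence problem far more robustly.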
Once the designer has created a curve, three further techniques (sketch surface methods) are possible. These are all essentially similar to ‘sketch-on-curve’ techniques with which CAD users will be familiar, in which a plane or axis system is defined at a certain point on a curve, and subsequent curves are drawn on the plane or planes. I♥Sketch achieves this through an ‘axis widget’ which can be placed on any existing curve and oriented using a set of pen gestures.
Axis widget placement on a curve and curve intersection © Department of Computer Science, University of Toronto
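To make the geometry of sketching on such a plane concrete, here is a hypothetical sketch (the function name and representation are my own, not I♥Sketch’s): 2D pen coordinates are mapped into 3D through the plane’s origin and two in-plane direction vectors.

```python
# Hypothetical illustration of sketching on a plane placed at a point
# on an existing curve (the role of the 'axis widget'): each 2D pen
# coordinate (u, v) becomes the 3D point origin + u*u_axis + v*v_axis.

def to_3d(origin, u_axis, v_axis, stroke_2d):
    """Map 2D stroke points onto the plane defined by origin and two axes."""
    ox, oy, oz = origin
    ux, uy, uz = u_axis
    vx, vy, vz = v_axis
    return [(ox + u * ux + v * vx,
             oy + u * uy + v * vy,
             oz + u * uz + v * vz) for u, v in stroke_2d]
```

Orienting the axis widget amounts to choosing the origin and the two axis vectors; once they are fixed, every stroke drawn in 2D has an unambiguous position in 3D.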
To test the viability of the system, the I♥Sketch team used a professional industrial designer, Calen Whitmore. Whitmore received a one-hour instruction session before spending 1½ hours testing all the features of the software. Once accustomed to working with the system, he spent ½ hour modelling the jet fighter shown in the video, and then 2½ hours designing the car shown below. Whitmore’s detailed feedback is included in the UIST08 paper, but in general he felt the system was much faster at generating good curves than traditional software, allowing more time to think about the design rather than how to achieve it. One distinct negative was the feature which automatically rotates the sketch to the ‘best’ drawing position (similar to how a designer might rotate a pad of paper). Whitmore asked for the time delay before rotation to be increased, but even so still gave this feature the lowest mark possible. I can also imagine that this feature, though well intentioned, would be extremely annoying, a bit like someone rotating my sketchpad without asking because they thought they knew the best position. Presumably a future development would allow this rotation to occur only after the designer has explicitly requested it.
3D sketch © Department of Computer Science, University of Toronto
Whitmore’s frustration with the automatic rotation feature demonstrates the fine line that has to be negotiated when deciding how software should make things ‘easier’ for the user. Most computer users have been in the situation where we know exactly what we want to achieve, but ‘user friendly’ software corrects our actions because it assumes we have made a mistake (I find Microsoft Word particularly infuriating in this respect). But the multi-stroke curve sketching is also an example of the software ‘guessing’ what the user is trying to accomplish, and this received top marks from Whitmore. In terms of I♥Sketch being used by inexperienced or non-designers, it seems to me this may be one area that could be developed. Those who are not confident at sketching tend to produce hesitant, less fluid strokes than those who are. It might be possible to develop algorithms which convert these hesitant lines into more flowing ones, or which suggest possible curves to choose from, based on the user’s attempts. Of course this isn’t going to turn a poorly conceived idea into a good one, but it may allow a poorly drawn idea to be realised in 3D.
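One simple way such ‘tidying’ of hesitant strokes might work — purely my own illustration, not a feature of I♥Sketch — is Laplacian smoothing, which repeatedly pulls each interior point of a stroke toward the midpoint of its neighbours while keeping the endpoints fixed.

```python
# Toy stroke-tidying by Laplacian smoothing (my own illustration, not
# part of ILoveSketch): each pass moves every interior point part-way
# toward the midpoint of its two neighbours; endpoints stay fixed.

def smooth(points, iterations=10, strength=0.5):
    pts = list(points)
    for _ in range(iterations):
        new = [pts[0]]  # first endpoint is fixed
        for i in range(1, len(pts) - 1):
            mx = (pts[i - 1][0] + pts[i + 1][0]) / 2
            my = (pts[i - 1][1] + pts[i + 1][1]) / 2
            x, y = pts[i]
            new.append((x + strength * (mx - x), y + strength * (my - y)))
        new.append(pts[-1])  # last endpoint is fixed
        pts = new
    return pts
```

Run on a jittery zigzag, the wobbles shrink with every pass while the overall path is preserved — a crude stand-in for turning a hesitant line into a flowing one.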
Some other possible future developments are hinted at in the UIST08 paper, which refers to surface modelling techniques such as FiberMesh. A project from the Computer Graphics department at the Technical University of Berlin, FiberMesh generates surfaces directly from sketched curves. The original curves remain on the generated surfaces, allowing further manipulation of the geometry, and a range of basic modelling operations such as cut, extrude and pull are available.
Sketching operations (from top to bottom): creation, cut, extrusion, tunnel © Computer Graphics Department, Technische Universität Berlin
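At the heart of FiberMesh is an optimisation that solves for the smoothest surface passing through the sketched curves. A one-dimensional toy analogue of my own (not FiberMesh’s actual algorithm) makes the idea concrete: hold the values at the constrained points fixed, then solve so that every free value equals the average of its neighbours — a discrete smoothness condition.

```python
# 1D toy analogue of constrained smooth-surface solving (my own sketch,
# not FiberMesh's algorithm): fixed indices play the role of sketched
# curves; the free values are iterated until each equals the average of
# its neighbours (simple Jacobi iteration).

def fill_smooth(values, fixed, iterations=500):
    """values: initial list of floats; fixed: set of indices held constant."""
    vals = list(values)
    for _ in range(iterations):
        new = list(vals)
        for i in range(1, len(vals) - 1):
            if i not in fixed:
                new[i] = (vals[i - 1] + vals[i + 1]) / 2
        vals = new
    return vals
```

With only the two ends fixed, the result is a straight ramp between them; in 2D, the same principle stretches a smooth membrane across the sketched boundary curves.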
I♥Sketch already generates files in .igs and .ma (Maya) format – the Maya files are available to download from the I♥Sketch website. According to Paul Salvador, who posts on the ProductDesignForums site as zxys, these Maya curves are very clean and allow surface patches to be easily constructed, as demonstrated in his model below. A system which generated surfaces automatically, as FiberMesh does, would greatly enhance the level to which the 3D curves could be understood. Furthermore, it might allow a surface itself to be selected as a drawing surface, so that curves could be drawn directly onto a curved body.
Surfaces created from Maya curves © Paul Salvador
One aspect which appears to be missing from I♥Sketch, surprisingly given the extent to which the developers have tried to understand how designers work, is the ability to use one sketch as the underlay for another. It seems that this should be possible – the description of Calen Whitmore’s process of designing the car details how four circles were permanently displayed to represent the wheels, giving a sense of size and proportion. However, the notion of one sketch being traced as the basis for a design iteration doesn’t seem to be implemented currently.
Exactly how useful this kind of system would be to someone not trained as a designer is open to question. It’s fairly obvious that, even though the need for expertise in 3D CAD systems is removed, it’s replaced by the need to be able to sketch confidently and economically. Sketching seems at first to be easier than operating CAD software, but it’s nonetheless a skill which few people get really good at, and it may be a dead end to assume that just because an interface is easier to understand, it follows that it is easier to achieve satisfactory results. This is the mistake that Neil Gershenfeld makes in Fab when he writes:
“… CAD tools still share a particularly serious limitation: they fail to take full advantage of the evolution of our species over the past few million years. The human race has put a great deal of effort into evolving two hands that work in three dimensions… A frontier in CAD systems is the introduction of user interfaces that capture the physical capabilities of their users… The most interesting approach of all is to abandon the use of a computer as a design tool and revert to the sophisticated modeling materials used in a well-equipped nursery school, like clay.”
CAD systems take advantage (not perfectly, I admit) of what computers are good at – fast, accurate computation and simulation. Yes, it takes a long time to learn how to use CAD, but it also takes a long time to learn how to sculpt in clay, and get exactly what you are after. The “physical capabilities of their users” might simply not be able to make the 3D form in clay, at which point you end up with an unsatisfactory product and the need to use a computer. The same could be true of I♥Sketch: without significant ‘help’ from the software, a user might end up just making marks on a screen, rather than creating a satisfactory (to the user) design.
Which brings me on, finally, to a recent development by Ponoko, called Photomake. This system allows a 2D sketch to be used as the design drawing for a product, which Ponoko manufactures by laser cutting a material of your choice. The 2D sketch has to be photographed or scanned, then uploaded to the Ponoko website, where software interprets the drawing and creates a final design. Essentially the software ‘vectorises’ a bitmap image, but anthropomorphising the process allows it to be thought of as the software deciding what the drawing really means. This is similar to what happens when one designer looks at another’s sketches: it can be amazing how designers, trained to ‘speak’ the same language, can read each other’s drawings, whereas a non-designer will miss much of the information, or place too much importance on a non-essential detail. It seems to me that unless non-designers are expected to become proficient in either sketching or some kind of 3D modelling, the only way for them to communicate their design intent satisfactorily will be the development of software able to interpret their wishes. This software may have to guide them, and it may insist they cannot do certain things, but its job will be to do what Photomake does, except with three-dimensional designs.
Photomake © Ponoko
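As a crude illustration of the first step of this kind of ‘vectorisation’ — my own toy version, certainly not Ponoko’s actual software — the scanned greyscale image is thresholded into ink and paper, and the ink pixels that border paper are collected as the outline through which a vector path could be traced.

```python
# Toy first step of bitmap 'vectorisation' (my own illustration, not
# Photomake): threshold a greyscale scan into ink/paper, then collect
# the ink pixels that touch paper — the outline of the drawing.

def boundary_pixels(image, threshold=128):
    """image: 2D list of greyscale values (0 = black ink, 255 = paper).
    Returns (row, col) positions of ink pixels adjacent to paper or the
    image border."""
    h, w = len(image), len(image[0])
    ink = [[image[r][c] < threshold for c in range(w)] for r in range(h)]
    outline = []
    for r in range(h):
        for c in range(w):
            if not ink[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < h and 0 <= nc < w) or not ink[nr][nc]:
                    outline.append((r, c))
                    break
    return outline
```

Real vectorisation goes much further — tracing these boundary pixels into ordered paths and fitting smooth curves to them — but the interpretation problem the article describes begins right here, at deciding which marks count as ink at all.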