Archiform 3D
      -  3D Techniques & Technology   -  3D Renderings for Architecture, Historical Perspective

    3D rendering actually came out a lot earlier than most people realise. The techniques that you see today evolved from work done as early as the 1970s. The photo-perfect 3D renderings you have become accustomed to evolved through new technology, faster computers and a new breed of artist.

    The Beginning of 3D

    3D on computers, as mentioned above, started a long time ago. But it wasn’t renderings that were being produced, it was wireframes. A computer, the fastest in those days, could plot a three-dimensional wireframe object and display it on the screen. As the hardware became faster the operator was able to move the 3D object around.

    From there came hidden line removal, where the parts of a wireframe model that would not normally be seen were removed and a more accurate shape was displayed. Soon after came perspective, which gave the 3D object “vanishing points” and created the look of real 3D depth. There was still no colour, but the lines plotted from the 3D computer model were ideal to work over. The process allowed an artist or illustrator to create a wireframe view, find a pleasing angle to present, then generate a hidden line perspective view and enhance it with traditional pen or paint based techniques.
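The perspective step described above boils down to a single divide-by-depth per point. A minimal sketch in Python (the function name and focal length are illustrative, not taken from any period software):

```python
def project(point, focal=1.0):
    """Project a 3D point onto a 2D view plane by dividing by depth."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

# Two points on the same 3D edge: the farther one lands nearer the
# centre of the view, which is what creates the vanishing-point effect.
near = project((1.0, 1.0, 2.0))
far = project((1.0, 1.0, 8.0))
print(near, far)  # (0.5, 0.5) (0.125, 0.125)
```

Parallel 3D edges projected this way all converge towards the same point on screen, which is exactly the vanishing-point behaviour that made these early views look like real 3D depth.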

    Coloured faces in 3D views soon came about, although they looked flat and lifeless because they lacked shading, texture, specular highlights, illumination and all the other traits that a real-life surface has. All of these steps we consider to be before the actual term 3D Rendering could be used, as they required significant manual effort outside of the computer to create any kind of artwork or illustration that was pleasing to the eye. The “3D” was there but the “Rendering” was still being done by hand.

    The first 3D Renderings

    Computer programmers were working hard towards achieving a realistic view from the computer alone, and advances in hardware performance were allowing more complex calculations. The term “3D Rendering” took hold with the arrival of processes such as “Phong shading” and, later on, “Ray Tracing”. These techniques used the 3D model as a basis but approached the colouring and 3D representation differently: they calculated the way that light hits a surface and how the viewer would perceive that light, just as happens in real life. Technically the process is reversed, in that the calculation starts from the viewing point and works backwards towards the lights, but the result is the same.
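That reversed calculation can be sketched in a few lines: trace one ray per pixel from the eye into the scene and test what it hits. Below is a toy example with a single sphere and a pinhole camera at the origin; all names and numbers are illustrative, not from any real renderer:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width=8, height=8):
    """Trace one ray per pixel, from the eye backwards into the scene."""
    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, 3.0), 1.0
    image = []
    for y in range(height):
        row = ""
        for x in range(width):
            # Map the pixel to a direction through a simple view plane.
            dx = (x + 0.5) / width * 2 - 1
            dy = (y + 0.5) / height * 2 - 1
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            direction = (dx / length, dy / length, 1.0 / length)
            hit = intersect_sphere(eye, direction, sphere_center, sphere_radius)
            row += "#" if hit else "."
        image.append(row)
    return image

for line in render():
    print(line)
```

Running it prints a small circle of “#” characters on a field of dots: the same eye-first, work-backwards calculation described above, just at postage-stamp resolution.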

    The first 3D Renderings were very basic indeed; in fact they weren’t much better than the previous coloured 3D faces, but they improved. Soon some basic 3D Rendering techniques and technologies came into play, such as:

    • Texturing. Applying a photographic surface to a 3D model, thereby creating a realistic surface. 3D texturing is now a very specialized task with many parameters to deal with.
    • Transparency. Put simply, you can now see through windows.
    • Shadows. Calculating the shadows cast by the lights in a scene and displaying them correctly. The technology of shadows alone could have a page written about it.
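Of the three, shadows illustrate the underlying idea most directly: a point is in shadow if anything blocks the straight line between it and the light. A minimal visibility test, assuming a single spherical occluder (the names and scene here are illustrative):

```python
import math

def in_shadow(point, light, center, radius):
    """True if a sphere blocks the straight line from point to light."""
    # Direction and distance from the surface point to the light.
    d = tuple(l - p for l, p in zip(light, point))
    dist = math.sqrt(sum(c * c for c in d))
    d = tuple(c / dist for c in d)
    # Closest approach of the shadow ray to the sphere's centre.
    oc = tuple(c - p for c, p in zip(center, point))
    t = sum(a * b for a, b in zip(oc, d))  # ray parameter of closest point
    if t < 0 or t > dist:
        return False                       # occluder is behind or beyond
    closest = tuple(p + t * dc for p, dc in zip(point, d))
    gap = math.sqrt(sum((a - b) ** 2 for a, b in zip(closest, center)))
    return gap < radius

light = (0.0, 10.0, 0.0)
blocker = ((0.0, 5.0, 0.0), 1.0)  # sphere halfway between point and light
print(in_shadow((0.0, 0.0, 0.0), light, *blocker))  # True  (blocked)
print(in_shadow((5.0, 0.0, 0.0), light, *blocker))  # False (off to the side)
```

A real renderer repeats this test for every surface point against every light and every object, which is why shadows alone added so much to render times.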

    Each of the effects above caused a 3D Rendering to take longer to process – hours, days or weeks. But today’s computers are fast enough to handle them with ease, so now we have new techniques that push our current computers to the limit.

    3D Rendering in the Movies

    All computer geeks know of the 1982 movie “Tron”. It pushed CGI (Computer Generated Imagery) to the limit. While very simple in today’s terms, it was groundbreaking then as the first film built around CGI 3D Rendering, incorporating what were then advanced techniques such as transparency.

    Tron inspired many artists, illustrators and computer nerds to enter the 3D Rendering field, although we quickly learnt that it wasn’t an easy field to enter. Tron worked because it didn’t try to be anything other than a computerized environment in a Sci-Fi film, which is how it was pulled off. Mainstream artwork was a completely different story, and 3D Rendered animation was even further away.

    Advances in 3D Rendering technology

    Hardware was the biggest obstacle to producing realistic 3D Renderings on time and to an acceptable quality. If you wanted a nice looking rendering then you needed more processing power. You could wait days for finished artwork and then face doing it all again if you needed to make an alteration. The speed of hardware was critical and the cost of anything that was nearly fast enough was prohibitive. But computers got faster and the offerings to mainstream business, artists and home users improved.

    Steve Bell, chief of Archiform 3D, was an early adopter in 1986 of the Atari 1040STF. This tidy box had a whole megabyte of RAM and ran at a blistering 8 MHz. It was ahead of its time in raw processing power, memory capacity and graphic ability. In comparison, Steve’s G5 RISC workstation today (2005) has two processors running at 2,500 MHz, 2,500 megabytes of RAM and two large LCD screens. But the Atari, among other new offerings, brought CAD and 3D to the average person. It was slow, unimpressive and difficult to learn, but it had potential.

    In the early 1990s Steve was producing 3D Renderings professionally, some being close to photographic quality, while most were still obviously 3D (this was not considered detrimental in all cases).

    Today’s computers still have some way to go to make 3D Rendering an effortless, fast process, but they are on their way. We are now able to link many computers together to create a “Render Farm” that splits the load of generating artwork into smaller pieces across many CPUs.
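The idea behind a render farm can be sketched with Python’s standard multiprocessing module: each row (or tile) of the frame is an independent job, so the work can be farmed out and the results reassembled in order. The shading function below is a stand-in pattern, not a real renderer:

```python
from multiprocessing import Pool

WIDTH = 16

def shade_row(y):
    # Stand-in for real per-pixel shading work: a cheap checker pattern.
    return [(x + y) % 2 for x in range(WIDTH)]

def render_parallel(height=16, workers=4):
    # Rows are independent, so the frame splits cleanly across CPUs
    # (or, on a real farm, across machines) and is reassembled in order.
    with Pool(workers) as pool:
        return pool.map(shade_row, range(height))

if __name__ == "__main__":
    frame = render_parallel()
    print(len(frame), "rows rendered")
```

Commercial farm managers add job queues, retries and asset distribution on top, but the principle is the same: the frame is embarrassingly parallel, so more machines mean proportionally less waiting.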

    Advanced 3D Rendering Techniques

    The advances in 3D Rendering hardware allowed new techniques to emerge. Without these new techniques, hardware would by now have caught up with the needs of the artist, but the desire for pure realism means we constantly push the limits of hardware and software. Some 3D Rendering techniques and technologies that evolved are:

    • Radiosity. In order to get a true representation of light you need to calculate indirect light – the light bounced off ordinary surfaces. Even flat surfaces reflect light. Without writing a chapter on Radiosity, you may simply think of it as the calculation of indirect, reflected light in a scene, not just its intensity but its colour. Radiosity requires immense computing power but creates superb 3D renderings.
    • Caustics. Another complex technology, but put simply it calculates focused and scattered light. For example, you may have played with a prism and seen how it splits light up into colours. Caustics performs the same sort of calculation and also requires immense computing power. While spectacular in its effect, it isn’t used often in the architectural scenes we create.
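The core of radiosity is a system of equations: each patch’s brightness is its own emission plus the reflected light it gathers from every other patch. A tiny sketch with two facing patches, solved by repeatedly re-gathering bounces (the reflectances and form factors are illustrative, not measured from a real scene):

```python
def radiosity(emission, reflectance, form_factors, bounces=50):
    """Solve B = E + rho * F * B by repeated substitution (gathering)."""
    n = len(emission)
    b = list(emission)
    for _ in range(bounces):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Two facing patches: patch 0 emits light, patch 1 only reflects it.
emission = [1.0, 0.0]
reflectance = [0.5, 0.5]
form_factors = [[0.0, 1.0],   # each patch sees only the other one
                [1.0, 0.0]]
print(radiosity(emission, reflectance, form_factors))
```

Patch 1 ends up lit purely by bounced light, and patch 0 is brighter than its own emission because some of that light comes back – the interreflection effect that makes radiosity renderings look so natural. A real scene has thousands of patches, hence the immense computing power.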

    Today’s 3D Artists

    Today’s 3D Rendering artist is a specialist. Like most new computer techniques, 3D Rendering was pioneered by enthusiasts but has now become a real profession. People train at university in this field, which has grown to have many different subsets. In the 3D Rendering CGI field alone you have:

    • Animators
    • Modellers
    • Character animators
    • Texture/Material builders
    • Lighting experts
    • IT personnel that keep the hardware running

    They have their own culture, their own version of Geek Speak and a whole lot of new TLAs (three letter acronyms) to keep you guessing as to what they may actually be talking about.