
Accepted in partial fulfillment of the requirements for the degree of Master of Architecture at The Savannah College of Art and Design

________________________________________________________________________ Scott G. Dietz, Professor of Architecture, Committee Chair Date

________________________________________________________________________ Ming Tang, Professor of Architecture, Faculty/Editor Date

________________________________________________________________________ Malcolm Kesson, Topic Consultant Date

Analysis Data as a Design Generator

A Thesis Statement Submitted to the Faculty of the Architecture Department in Partial Fulfillment of the Requirements for the Degree of Master of Architecture

At

The Savannah College of Art and Design

By

Dumitru Dima Chiriacov

Savannah, Georgia, May 2010

Acknowledgements
I would love to thank everyone who supported my work and growth over the last several years here at SCAD:

My committee members Ming Tang and Malcolm Kesson, for investing their time in my work and for their enthusiasm, expertise, experience, and the many useful ideas that shaped my thesis. It was fun!

My SCAD tennis coach Chuck Keenan, for being supportive and flexible, and for being responsible for bringing me to SCAD, thus sharing responsibility for all the great things that happened to me during my time in Savannah.

Dr. Rafi Muhanna, for his expertise and for allowing me to attend his introductory Finite Element class at the Georgia Institute of Technology.

My (now extended) family, for their support, understanding, and for always believing in me.

My very special appreciation goes to my committee chair, Scott G. Dietz. I truly believe that without his support I would not have come anywhere near what I was able to achieve in my studies and in discovering new, valuable skills, nor would this thesis have turned into such a fun learning experience and such valuable research. That is why I want to thank him for his support, wisdom, and mentorship during these very productive years at SCAD.

Last but not least, I want to thank the most important person, my wife Tamara, for her love, patience, and understanding throughout these four turbulent years. Without your support none of this would have been possible.

CONTENTS

1 Introduction
1.0 Motivation
1.1 Conceptual Framework and Precedents
1.1.0 Precedents in Engineering
1.1.1 Precedents in Architecture
2 Developing Methods
2.0 Analysis and Data Extraction
2.0.0 Analysis of Urban Conditions and Demographics
2.0.1 Shaping the Site and Building Massing
2.0.2 Building Structure and Envelope
2.1 Custom Methods of Problem Solving
2.1.0 Scripting
2.1.1 Programming
3 Applying Methods
3.0 Site and Massing
3.0.0 Client's Input and Context Analysis
3.0.1 Interpretation: Developing the Problem-Solving Algorithm
3.0.2 Interpretation: From Human Language to Algorithmic Expression
3.0.3 Application: Parametric Site
3.0.4 Application: Applying the Algorithm
3.1 Program and Built Volume
3.1.0 Possibilities for Program Solution
3.1.1 Program Solver: Preparation
3.1.2 Program Solver: Solution
3.1.3 Circulation and Egress
3.2 Building Systems
3.2.0 Smart Building Systems: General Thoughts
3.2.1 Smart Building Systems: Glare Control, Aperture Curtain Wall
3.2.1.a Design Process
3.2.1.b Economy
3.2.1.c Software
4 Conclusion
Presentation Boards
Bibliography


Part 1

Introduction

This thesis will explore the application of a procedural approach in architecture through the direct utilization of project-specific constraints and extracted analytical data. The research targets the development of a series of methods in which the architect acts as an interpreter of project constraints and site-specific information, helping to generate design intent and giving this data the opportunity to become generative. These methods are expected to offer a higher degree of precision and efficiency to the architectural design process, as well as better control over the design outcome.

Chapter 1.0

Motivation

Despite the great pace at which tools in the architecture, engineering, and design industries have been developing lately, one very common problem can still be observed in architecture today: the underestimation of the importance of specific analysis information. The data collected prior to the actual conceptual design (assuming that analyses were conducted at all) has very little direct influence on the architectural design process and its outcome. Even if this information has been thoroughly analyzed, chances are that it will influence the design process only through the arbitrary assumptions of an architect, being taken into consideration merely as a visual reference. Thus, there is a gap between the various types of analysis that precede the design phases and the architectural design itself. Meanwhile, the sets of data collected during the analytical phase can and should be thought of as extremely powerful means by which the outcome of design can be influenced and shaped directly. Based on the designer's assumptions about the value (and compatibility) of the data from different types of analysis, it can be utilized to create a design that truly responds to project goals, constraints, and specific site conditions at each phase of design.

Chapter 1.1

Conceptual Framework and Precedents

It is impossible to imagine a real-life architectural project that would not be affected by certain constraints and limitations. A great number of influencing factors (effectors) are involved in each architectural project, and every architectural design (with very few exceptions) is unique in its own way. What makes each project a special case are the numerous side factors affecting it at each stage of the design process. These crucial factors can include existing transportation infrastructure, local demographics and population income levels, site adjacencies, local climate, solar paths, the client's program requirements, local codes, and many more. For any given project the number of influencing factors tends toward infinity. Of course, they carry different levels of importance (importance coefficients), but addressing all the effectors and constraints becomes a virtually impossible task; as William Mitchell once noted, "If one assumes that everything is possible at the outset of designing, the designer must continually constrain the problem in order to arrive at a solution."[1] The desire to increase the intelligence (and thus the legibility) of the design response leads to the involvement of as much influence data as possible. The combination and complexity of these factors and design constraints certainly create challenges for architects. However, they can also be seen as opportunities that create starting points for design explorations and a guiding path toward the formulation of design intent. That is why analysis and its data are of major importance at every stage of the design process. Fortunately, the output of the analyses that can affect an architect's design decisions is precise and straightforward data that is either numeric (as in the case of analyses conducted on solar exposure, stress, demographics, etc.) or geometric (such as GIS shapefiles).

[1] William J. Mitchell, The Logic of Architecture: Design, Computation, and Cognition (Cambridge, MA: MIT Press, 1990).

Both of these data types correspond well to what architects deal with on a daily basis: geometry and numbers. Thus, it would be logical to assume that this information can become generative. However, analytical data usually comes as very large and complex pieces of information (regardless of the format). If it is intended to be used to benefit the design and to play a generative part, the complexity of the output data has to be dealt with somehow. An important thing to consider is that the sets of influencing factors and project constraints often do not exist independently of each other; apart from their nominal value they can also have interdependencies, which in turn contributes dramatically to the complexity of the problem-solving system. It is easy to see how, by adding extra effectors throughout the design process, the set can become too complex for a human (designer's) mind to process adequately. That is when the procedural approach comes into play. When working with large sets of data it is important to be able to define explicit rules according to which the data will be manipulated. This can hardly be achieved without introducing algorithms into the design process through programming or graphical algorithmic interfaces. This way, a large amount of relevant analytical data can not only be taken into consideration in the design process but also tied into a problem-solving set of rules generated by the designer. The direct introduction of analysis data into the design process via programming first became widely implemented not in architecture but in engineering. Some of the reasons for this were the need to handle the complexity of large assemblies and to deal with complex input, output, and intermediate calculations. This thesis is an attempt to apply the design-through-data approach at every design stage.

Unfortunately, the application of generative analysis data throughout the whole process is not yet common in architecture; however, there are many great examples of the utilization of analysis data in the process, and of the creation of custom tools via programming for different purposes at selected project stages.

1.1.0 Precedents in Engineering

ARUP is an engineering and technology consultancy that has taken on some of the most unique and complex (both structurally and programmatically) projects of the past few decades. The most complicated projects very often require a pioneering approach and thus the creation of custom tools. When the Finite Element Method was developed in the early 1950s for calculating complex structural and elasticity problems, it became widely used in engineering. Because FEM implies a heavy load of calculations (solving multiple partial differential equations), programming became an inseparable part of it. Accordingly, ARUP turned to advanced computation in order to customize its process and respond adequately to complexity. Over the years the firm developed a solid library of custom tools written for the purposes of particular projects, which in turn led to the development of the commercial software OASYS, a multipurpose FEM solver.

Figure 1.1 Sample Screenshots from OASYS FEM Solver

However, even with such a powerful tool in its arsenal, ARUP does not rely only on OASYS; with each new non-traditional project its methods again undergo customization, in which programming plays an important role. Adams Kara Taylor (London) is another example of an engineering consultancy that relies heavily on custom applications and project-specific data in its process. Its commission on the Napoli-Afragola High Speed Rail System, in collaboration with Zaha Hadid Architects, involved the development of a custom piece of software used to derive building volumes from the site geometry.[2] In this example the site-specific information was wrapped into a series of algorithms (based on principles of magnetic fields) that output options for the future train station. Using this software, AKT also rationalized the output with respect to structural accuracy.

1.1.1 Precedents in Architecture

In architectural design there are many practices that either experiment with programming in their design processes or use it on a regular basis, along with in-depth analysis data extraction. Among them are Foster + Partners (and their in-house SMG consultancy), Zaha Hadid Architects, SHoP Architects, Asymptote Architecture, ROEWU, and many more; these practices rely heavily on programming to refine and improve their design methodologies and the efficiency of the process. As an attempt to improve its design pipeline, Zaha Hadid Architects established an in-house research lab. The members of the research group work on improving the workflows in the office and on creating new custom tools and methods for expressing design intent.

[2] AKT, Michael Meredith, Aranda-Lasch, and Mutsuro Sasaki, From Control to Design: Parametric/Algorithmic Architecture (New York: Actar-D, 2008), 138.

Arguably the stage of ZHA's projects that undergoes the most intensive customization is the urban stage. The research group has developed many tools that explore form finding based on urban conditions. The most common method they implement is to write custom plug-ins for Autodesk Maya. The algorithms these new pieces of software consist of represent the behavior the design team expects of the manipulated geometry, and even though the result is often unexpected, it is always a product of programmed intent.

Figure 1.2 Custom Digital Tools from ZHA

According to Shajay Bhooshan (the leader of the research group at ZHA), the form-finding functions are not the only benefit of this approach. Customization of ZHA's tools lets the design team test an object's behavior (whether structure or envelope) under specific environmental or loading conditions, makes the exchange of information with consultants smoother, and allows many design options to be generated for evaluation in a very short period of time, avoiding intensive labor.


Part 2

Developing Methods


Chapter 2.0

Analysis and Data Extraction


2.0.0 Analysis of Urban Conditions and Demographics

Overview

Very often, pre-design analysis begins at a scale larger than the project site. Gathering data about the area far beyond the existing site can create many more design opportunities than analyzing only its direct adjacencies. This is true not only for very large-scale architectural projects or urban design projects, but also for projects of a smaller range. Many of the methods for conducting analyses that deal with large-scale demographic and urban/suburban data collection rely on GIS (geographic information system) software. A vast amount of GIS data is available for free (developed by state-funded organizations) or for purchase (developed by private data-collection offices). This data comes as database-type (Excel-like) files and graphic shapefiles (maps consisting of geometric shapes representing elements such as blocks, buildings, roads, etc.). The database files can be connected (attached) to the shapefile and, once run through the GIS software interface, the user gets a set of objects with different attributes assigned to them. From this point the user can conduct profound GIS analysis of any area of interest within the file's range.

Figure 2.1 Population Income Analysis with ArcGIS


Implementation

Depending on the design intent, there might be a need for precise statistical data. One of the opportunities GIS data offers is the analysis of an area of interest based on various demographic attributes. The user can query statistical information based on the attributes assigned to the geometry (city blocks, buildings, etc.) in the shapefile. For instance, city blocks can be compared based on racial distribution, a certain threshold of average household income, average house value, and so on. All these queries can serve as graphic representations, be easily output to PDF maps, and be used in presentations.

Figure 2.2 Demographics Analysis with ArcGIS

However, this data can be taken even further in order to participate in the generation of the actual design. For example, the population density within a city can be extracted through Excel tables as numeric values. These numbers can then be unitized, joined with RGB values, and run through a script within 3D modeling software to generate height fields. This might be used for zoning purposes, to determine which areas are overpopulated and need an increase in allowed building height. Another example of GIS output utilization would be to extract a combination of numeric attributes for a group of city blocks based on unemployment rate, income level below a set threshold, and property values. These tables can then be run through a custom application that determines the shared boundaries between well-to-do neighborhoods and problem neighborhoods, which can indicate the placement for an urban revitalization program.
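As a minimal sketch of the first example, the fragment below (plain Python; the CSV layout, column names, and scaling values are assumptions for illustration, not an actual script from this thesis) shows how per-block density values exported from a GIS table could be unitized and mapped to extrusion heights and RGB colors before being handed to a modeling environment.

import csv

def load_densities(path):
    # read block id, x, y, and population density from an exported table
    rows = []
    with open(path, newline="") as f:
        for r in csv.DictReader(f):
            rows.append((r["block_id"], float(r["x"]), float(r["y"]), float(r["density"])))
    return rows

def unitize(values):
    # scale a list of numbers into the 0..1 range
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def height_field(rows, max_height=120.0):
    # map normalized density to an extrusion height and a red-to-blue color ramp
    t = unitize([r[3] for r in rows])
    result = []
    for (block_id, x, y, _), u in zip(rows, t):
        result.append({"block": block_id, "x": x, "y": y,
                       "height": u * max_height,
                       "rgb": (int(255 * u), 0, int(255 * (1 - u)))})
    return result

blocks = load_densities("census_blocks.csv")  # hypothetical export
for b in height_field(blocks)[:5]:
    print(b)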

2.0.1 Shaping the Site and Building Massing

Overview

The step following the analysis of demographics and the urban setting might be to begin shaping the boundaries of the site and defining the masses of the building envelopes. It is very important that the data collected during the previous stage of analysis, and its interpretation, are not lost along the way. Cities function as dynamic organisms. They undergo constant change in a multidimensional space: they constantly deform and thus constantly evolve all of their components, such as nodes, axes, landmarks, transportation networks, green space, etc. All these components are important and have to be taken into consideration while shaping the site boundaries and the building massing.

Implementation

Urban components extracted from a precise GIS source can be assigned geometric properties (or elements) in the following way: axes as lines or curves, nodes and landmarks as points (or arrays of points), the transportation network as a grid, boundaries, and so on. Each component that will affect the geometry of the site and the building volume has to be assigned its own unique importance value. In order to execute the form-generating process there is a need for a custom application that runs the inputs (the urban components) through code to generate the output. One way this can be accomplished is through the generation of a vector field, which can be affected by tensor objects (in this case the important geometry and components of a city). One interesting precedent for the vector field generation approach is the Napoli-Afragola High Speed Rail Station, developed by Adams Kara Taylor in collaboration with Zaha Hadid Architects. In this project the volumes for the train station were developed based on the result of a tensor field analysis. Custom software was developed for this particular project, and the inputs (the affecters) were the trajectories of the adjacent site geometry. This way the precise data extracted from the site produced the volume for the future building.

2.0.2 Building Structure and Envelope

Overview

Once the building massing is defined and some space-planning decisions are made, one of the next steps can be the analysis of the envelope and structure. The outcome of these analyses can resolve many things, such as manufacturing issues, the building's exterior appearance, the health and sustainability of the building, structural stability, and more. It is crucial to properly interpret and transfer the structural and environmental analysis data directly into the design process, to avoid designing based merely on assumptions about the results of the analysis. Failing to do so can cause serious lapses in design implementation and possibly eliminate any positive effect the analysis could bring to the outcome. There is a great number of software packages that specialize in FEM (finite element method) analysis, behavior simulation, geotechnical analysis, and building performance analysis, to name a few. Some software brands of this type are Ecotect (building performance analysis), GBS (energy performance), IES (building systems simulation), Algor (FEM analysis), and Abaqus (FEM analysis and lifecycle simulation for CATIA).

Implementation

FEM and building performance analyses, if properly utilized, can affect the overall design a great deal. At this stage of project development (when the massing and partial planning are done), lapses in analysis interpretation can cost a lot of money and reduce the building's performance. The good news is that the information (after the FEM and performance analyses are executed) is available not only as a visual reference (the user can see the output and the behavior of the model via the analysis software's user interface) but also as precise sets of data, usually output to Excel spreadsheets or database files. Some possible design implementations are worth discussing. For instance, the surface of the building volume might be exported as a mesh to FEM analysis software and checked for stress and deformation by assigning a distributed load to it. The values of the analysis are output as a stress value for every mesh vertex and brought into a spreadsheet. It is logical to conclude that, for this surface (envelope) to respond successfully to the loads and have structural value, the material thickness has to be related to the stress value at each point on the envelope.

Figure 2.3 Von Mises Stress Analysis with ALGOR

Next, the same surface can be checked for solar exposure. The envelope of the building (the resultant volume of the previous design explorations) can be checked for incident solar radiation based on a weather file for the area. The output of this query comes as a set of data that contains the locations of the analyzed points on the mesh, the normal vectors, and the solar exposure value for each particular point on the mesh (or for each mesh face). Again, all the data can be output to a single spreadsheet.

Figure 2.4 Solar Radiation Analysis in Ecotect

The information from both of these analysis explorations can be used to generate and manufacture structural envelope panels, with a fenestration component embedded in each panel, whose thickness depends on the curvature of the surface and the resultant stress values. In order to achieve this, the base surface can be taken back into the 3D modeling software. A custom script is needed that acts in the following way (a minimal sketch of such a script follows this list):
- scales the values of the stress analysis based on the max/min range and outputs an array of envelope-thickness values proportional to the stress value at each mesh vertex
- offsets the base mesh vertices by the scaled stress value
- generates a new mesh from the offset points (this is the outside surface of the envelope)
- creates a panel component with a fenestration opening between each pair of bottom and top boundary meshes (the fenestration aperture value is the inverse of the scaled solar exposure value for each quad on the mesh)
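The following fragment is a hedged sketch of that logic in plain Python, assuming the mesh arrives as simple lists of vertices, per-vertex normals, per-vertex stress values, and per-face solar exposure values; the data layout, names, and thickness range are illustrative rather than the thesis's actual Grasshopper/Rhino code.

def normalize(values):
    # rescale a list of numbers into the 0..1 range
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def offset_vertices(vertices, normals, stress, min_t=0.05, max_t=0.40):
    # offset each vertex along its normal by a thickness proportional to its stress value
    out = []
    for (x, y, z), (nx, ny, nz), s in zip(vertices, normals, normalize(stress)):
        t = min_t + s * (max_t - min_t)   # thicker where stress is higher
        out.append((x + nx * t, y + ny * t, z + nz * t))
    return out

def aperture_ratios(solar_per_face):
    # aperture is the inverse of scaled solar exposure: more sun, smaller opening
    return [1.0 - s for s in normalize(solar_per_face)]

The inner (base) mesh and the outer (offset) mesh would then be rebuilt in the modeling software, and a fenestration panel component instanced between corresponding quads, with its opening scaled by the aperture ratio.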

Figure 2.5 Surface Population with Solar Radiation Driven Aperture Components

The methods of design generation through analysis proposed above show how important the extracted data can be at different stages of architectural design. In this case the decisions made by the architect deal with the interpretation of local conditions and the existing factors and constraints of the project. Direct utilization of analysis data reduces the possibility of lapses throughout the design process, compared with the case in which the analysis is used only as a visual reference for the architect's assumptions. Moreover, the design output produced by means of these explorations is truly specific to the location and to the goals set for each stage of the project.


Chapter 2.1

Custom Methods of Problem Solving


In an attempt to create a design that addresses all intent-driven, project-related factors, one would have to use advanced computation to create custom design systems (tools) that tie together all the important factors and their interrelationships. These problem-solving design systems can be thought of as a new type of tool produced by the designer/architect to address and solve a particular design situation. Cristiano Ceccato, in his paper "Integration: Master [Planner | Programmer | Builder]," refers to this new type of tool as Third Generation Tools: "A new generation of tools is thus envisaged to achieve the Third Generation paradigm in question: these can be understood as pure tools in the sense that they operate on a completely different basis of understanding than conventional CAD tools. These tools will enable the architect to approach the task of design in a completely different manner; not by functioning as graphic translators or organizers, but by requiring input in the form of rules, gestures, goals and parameters, and a defining grammar which governs the combination thereof."[3] In defining the Third Generation paradigm, which revolves around programming, Ceccato refers to First Generation design tools as CAD-type software, and to Second Generation design tools as the design software available today that allows collaboration, complex document output, and parametric functionality. The methods that allow designers to customize their approach to complex problem-solving tool creation are programming and script writing.

[3] Cristiano Ceccato, "Integration: Master [Planner / Programmer / Builder]" (paper presented at the IV Generative Art International Conference, Milan, Italy, December 11-14, 2001).


The ideas behind these two methods are quite different, yet they share common ground: both programming and scripting rely on one language, or a combination of languages, to reach the end goal.

2.1.0 Scripting

The methodology of scripting is part of a concept called End-User Programming. End-User Programming is a concept in computer science that offers several methods allowing end users (non-professionals in programming) to create, automate, and customize functions and tools within the software of interest to achieve desired results. A scripting language is a form of end-user programming that usually works as a plug-in or a function embedded into the host software. A good example applicable to architecture and design is the various types of 3D modeling software used intensively in the industry, the majority of which have scripting capabilities. For instance, Rhinoceros 3D (by McNeel) has a plug-in, RhinoScript, a scripting language based on VBScript with multiple software-native functions and methods added on top of the original VBScript ones. Maya (animation/3D modeling software) offers its own widely used scripting options, such as MEL (Maya Embedded Language) and a Python plug-in. These scripting options open wide possibilities for design purposes, such as accessing and creating geometry at the root level without being limited to the predefined tools included in the software package. In this scenario the designer has to understand how the geometry is created and how it is treated by the software.


The good part is that, in this case, sets of explicit rules can be coded to generate geometry derived from the project's factors of influence and project constraints. As an example of a first-hand encounter with coding for architecture, the impression of a student from the Parametric and Generative Tools for Design and Fabrication workshop (mentioned in Yanni Loukissas's thesis) can be cited: Workshop Student: "There is a disconnect between what you need to know to design and what you need to know to code. You have to know too much to code all points (example of apertures). I can sketch a shape, but to codify it I need to know all of the geometry that makes up that shape."[4]
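To make the idea of an explicit, coded rule concrete, here is a small hedged illustration using Rhino's Python scripting interface (rhinoscriptsyntax); it is not code from this thesis, only an example of a rule stated explicitly: circles on a grid shrink as they approach an attractor point, whose location and the radius range are assumed values.

import rhinoscriptsyntax as rs

def attractor_grid(attractor=(25.0, 25.0, 0.0), nx=10, ny=10, spacing=5.0,
                   r_min=0.3, r_max=2.0):
    # build the grid of candidate centers and measure each one's distance to the attractor
    centers, distances = [], []
    for i in range(nx):
        for j in range(ny):
            c = (i * spacing, j * spacing, 0.0)
            centers.append(c)
            distances.append(rs.Distance(attractor, c))
    d_max = max(distances)
    for c, d in zip(centers, distances):
        # the explicit rule: the farther from the attractor, the larger the circle
        radius = r_min + (d / d_max) * (r_max - r_min)
        rs.AddCircle(c, radius)

attractor_grid()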

2.1.1 Programming

As mentioned earlier, there is a serious difference between scripting and programming, apart from the fact that both rely on computational languages. While scripting is a way of navigating through and customizing one's work within a program (and often even building systems within it), programming is the way to create individual programs. In order for an application to be used, its source code, written in a programming language, has to be compiled to generate executable files, which means it must be translated from the programming language into machine language. In the case of scripting, by contrast, the written code remains unchanged and runs in the form in which the programmer wrote it. In general, programming allows for almost no limitations from the design point of view.
[4] Yanni Loukissas, "Rulebuilding: Exploring Design Worlds through End-User Programming" (master's thesis, MIT, 2003), 35.


For tasks and systems that cannot be described by means of scripting, due to certain limitations of the software, there is always the possibility of creating an application based on the design intent that accommodates whatever functions are needed. The architectural profession knows quite a few examples of designers resorting to writing custom applications in order to solve problems caused by the complexity of the input and output. One example is the custom software written by AKT (Adams Kara Taylor) for Zaha Hadid's Napoli-Afragola High Speed Rail System project. The software was intended to generate a pointcloud that would reproduce the future building volume based on the geometrical input of site elements.[5] Beyond this example, offices such as Foster + Partners, IJP, and ARUP have staff units that rely heavily on programming for various purposes. Accommodating multiple design-intent-driven factors by combining them into a system is not an easy task, nor is it easy to generate a precise design output that reflects the design intent. These types of tasks require a custom approach and the introduction of computation in order to handle the complexity of the system. Thus, being aware of the possibilities of programming and machine logic becomes essential for an architect to participate in creating systems of explicit relationships between the design-influencing factors. In order for a design team to be able to generate tools that allow a custom approach to a given design problem, new knowledge, skills, and a new mindset are needed.

[5] AKT, Michael Meredith, Aranda-Lasch, and Mutsuro Sasaki, From Control to Design: Parametric/Algorithmic Architecture (New York: Actar-D, 2008), 138.


Part 3

Applying Methods


Chapter 3.0

Site and Massing


3.0.0 Client's Input and Context Analysis

Working with the site and its urban context very often initiates the design process in architecture. It is also the part of the project in which the designer is introduced to the client's design expectations and program demands. The decisions made at this stage will most likely become crucial to the further development of the project, so it is important to back them up with profound analysis and precise data. The idea behind the urban design and context analysis part of this thesis project is to maximize the design output and the efficiency of design decisions based on the client's project demands. The designer's goal is not only to collect the client's demands regarding the performance of the program and expectations about the final product, but also to properly analyze this information and turn it into a system that can adequately address all of the client's challenges and arrive at a design solution of maximum efficiency and appropriateness. This stage of the project consists of a series of exercises that attempt to mimic the real-life interaction between designer and client. The client (the thesis instructor) played the major role in the choice of site and the formulation of the design goals. The designer's (student's) role is to appropriately interpret the client's demands and turn them into a problem-solving algorithm that responds to these demands. The really challenging part of this type of designer/client interaction is that the client's ideas are expressed in human language, but in order to address the design problem via computation, the human expression of the problem has to be translated into the algorithmic expressions and language best suited to the digital environment in which the problem will be processed.


For example, in order to get an output in the form of geometry within 3D modeling software, the algorithm (script) has to take arguments in a form understandable to that particular software; in this case these would usually be geometrical objects or numeric data. To organize the thought process and the sequence of actions, a plan was laid out for the Context Analysis and Massing stage of the project in order to optimize the translation from human to machine (algorithmic) language. The plan is split into three stages: input collection and analysis, input grouping and interpretation, and analysis / solution testing:
- collect the client's information and demands for the project output
- analyze the client's input and the project context
- interpret and group the input
- produce a solution as a robust and flexible algorithm that accounts for multiple factors
- identify and conduct the needed and best-suited analyses
- produce building volume(s) influenced by the client's priorities
- test this solution in the context of other site(s)
During the input collection stage the client selected a site and expressed his demands and desires for the project outcome. The site is in Atlanta, GA, right next to one of the busiest highway interchanges in the country. It is located on territory that belongs to SCAD Atlanta and is delimited by the Spring Buford Connector, Spring Street, and 18th Street. The location is in direct proximity to two major highways, I-75 and I-85. The site, located on SCAD's campus, is bordered by the academic building and a parking structure on the west. The campus continues across the Spring Buford Connector, where the SCAD student dormitories are located.


To the north the site borders a lot occupied by a historic building that belongs to Peachtree Christian Church. The site has visual proximity to Atlantic Station, which is the closest entertainment and shopping development. Across the highway (I-85), in the north-western direction, there is the Atlanta Amtrak Station, an important transportation (railway) node a fraction of a mile away. There are no subway stations in direct proximity to the site; the closest MARTA subway station is the Arts Center Station (half a mile away to the southeast).

Figure 3.1 Current Site

The client expressed his concerns regarding the site, as well as his primary requirements and goals for the project. These issues had to be analyzed by the designer and turned into a problem-solving algorithm in order to satisfy the project goals.


Client input:
0) Connection to the main SCAD building
1) Connection to parking
2) Connection/access to the dorms through a tunnel
3) Connection to the rail station (and MARTA system)
4) Acoustic barrier from the highway
5) Barrier from the historic building
6) Link to downtown
7) Link to Atlantic Station
8) View from the highway (I-75/I-85) to the site
9) More elevation

3.0.1 Interpretation: Developing the Problem-Solving Algorithm

The client's input is of great importance to the designer and definitely becomes a starting point in the design process. However, since it is expressed in human language, it has little value in its current form once it needs to be dealt with by means of computation. That is why all this information has to be thoroughly analyzed, regrouped, and interpreted into a form that can become an algorithmic design solution. First of all, there is a need to determine the categories through which the problem will be approached, taking into consideration the available tools.


As already discussed in the previous chapters, the design problem will be approached by means of scripting (procedurally), and the output has to be generated as geometry, which means the work will be done within 3D modeling software via scripting in order to represent the design intent procedurally. The concept for the problem-solving algorithm is based on the idea that all the components of the design intent can be tied directly to real (physical) elements on the site (i.e., site geometry) or to particular urban elements such as nodes, axes, the urban grid, and so on. To make these useful within the context of 3D modeling software, all of these components have to be transformed into actual geometrical instances so that they can be passed as variables. For instance, variables of the node type can be transformed into 3D coordinates (points) or spheres, axes (depending on their purpose) can become curves or vectors, and the urban fabric can be represented as a network of curves or a surface wireframe grid. Taking this concept further, the site that will be affected by the design can be represented as a nominal, unaffected pointcloud that fills up the space of the site. The geometrical representations of the design intent components must have an effect on the geometry of the site (the pointcloud) in order to initiate the design process. It would be better to say that the site geometry is affected by a combination of forces that the components of the design intent radiate. But how can one extract forces from these (already geometrical) instances? One way to approach this problem is to go deeper into the properties of geometrical objects. In science, forces are usually represented as vectors: quantities with two properties, a scalar value (magnitude) and a direction. Forces in the form of vectors can be extracted from any of the geometrical objects discussed earlier. For example, objects of the node type, represented as spheres, can expose vectors (forces) generated from the center of each sphere to the points of the sphere's subdivisions. This way, to control the forces that a node radiates, one needs to assign the sphere's radius, which becomes the magnitude of the vectors the node produces; the number of vectors can be controlled by the subdivision of the sphere's surface.

Figure 3.2 Node Vector Forces Extraction
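A hedged sketch of this node-type extraction in plain Python (no modeling API; the subdivision scheme and names are illustrative): vectors run from the sphere's center to points on its subdivided surface, and the radius doubles as the magnitude.

import math

def node_forces(center, radius, u_div=8, v_div=4):
    # vectors from the sphere center to the points of its u/v subdivision
    cx, cy, cz = center
    vectors = []
    for i in range(u_div):
        theta = 2.0 * math.pi * i / u_div          # longitude
        for j in range(1, v_div):
            phi = math.pi * j / v_div              # latitude (poles skipped)
            px = cx + radius * math.sin(phi) * math.cos(theta)
            py = cy + radius * math.sin(phi) * math.sin(theta)
            pz = cz + radius * math.cos(phi)
            # the emitted force: center -> surface point, magnitude equals the radius
            vectors.append((px - cx, py - cy, pz - cz))
    return vectors

forces = node_forces(center=(0.0, 0.0, 0.0), radius=3.0)   # radius acts as importance
print(len(forces), forces[0])

A denser subdivision yields more vectors; a larger radius yields stronger ones.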

Figure 3.3 Axis (Normal) Forces Extraction

Objects of the Axis type can produce forces (vectors) just as nodes can. Axes are represented as curves, and curves can produce vectors in different ways. For example, a curve can be tested for tangency (the tangent vector) at any given parameter along the curve. This operation produces a vector that is tangent to the curve at the tested location. In other words, if the direction of a tested axis is important, it can be sampled at multiple locations along the curve that represents it, and each of the tangent vectors produced by this operation will represent the direction of the axis at the sampled location.


Another way of extracting vectors from a curve is by querying curvature data at any point. Here the vector is created from the sample point on the curve to the center point of the curvature circle; this vector is then unitized (its magnitude reset to 1) and multiplied by the value the force needs to have. As a result, the designer gets a series of forces that are normal to the tested axis at each sampled location. (This method does not work for curves with zero curvature, such as a straight line, where the radius of curvature is infinite, but it can easily be worked around by generating vectors perpendicular to the line instead of querying curvature values.) The type of vector extraction (in the case of Axis geometry) becomes important when objects of the same type have to serve different functions. For instance, where a tested axis plays the role of a dynamic object and its direction is crucial, the method of force extraction has to be set to tangent, because in this case the extracted vectors represent the direction of the curve (for example, a part of a highway). Normal vectors can be queried for axis-like objects of a static type, where the direction does not need to be emphasized and the tested axis affects the site geometry via its curvature (normal) direction. Also, based on the interpretation and purpose of the geometry, the orientation of the vectors becomes an important property. If an object affects the site as an attractor, the vectors have to face outwards, in the opposite direction from the target object; if an object repulses the target object, the vectors have to face towards it. The same concept of attract versus repulse holds true not only for objects of the Axis type but for Nodes as well. Once the vectors (forces) are generated for all of the effecter objects, each point in the pointcloud (on site) has to be affected in a unique way so that the complexity of the input factors can be properly satisfied. The procedure responsible for customizing the effect on each point on site is built around the idea that the same vector (force), sampled from one point on a certain effecter object, has to be of a different magnitude (power) for two points in the pointcloud that are spaced differently from the sample point. That is why all the sampled vectors are tested for distance to each point in the cloud and scaled down according to where they fall in the distance range. The scaled forces are then summed up, and a unique resultant vector is produced for each point in the pointcloud. After that operation each point in the cloud is translated along the resultant vector appended to it, and the cloud is deformed.
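The following is a minimal sketch of that core step in plain Python, with illustrative names and a simple linear falloff assumed for the distance scaling (the thesis implements this inside Grasshopper): every sampled force is weighted by its distance to each point of the pointcloud, the weighted forces are summed into one resultant per point, and the point is translated by that resultant.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def deform_pointcloud(points, sample_points, sample_vectors):
    # points: the site pointcloud; sample_points/sample_vectors: where each force
    # was sampled on an affecter, and the force itself
    moved = []
    for p in points:
        dists = [distance(p, s) for s in sample_points]
        d_min, d_max = min(dists), max(dists)
        resultant = [0.0, 0.0, 0.0]
        for v, d in zip(sample_vectors, dists):
            # closer samples keep their magnitude (weight 1.0); distant ones fade to 0.0
            w = 1.0 - (d - d_min) / (d_max - d_min) if d_max > d_min else 1.0
            resultant = [r + w * c for r, c in zip(resultant, v)]
        moved.append(tuple(pi + ri for pi, ri in zip(p, resultant)))
    return moved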

3.0.2 Interpretation: From Human Language to Algorithmic Expression

The algorithm discussed above exposes the need for an appropriate interpretation of the design intent and (in this case) the client's input. The human-language input must be appropriately categorized in order to determine which properties of the algorithmic input it will be responsible for. It is a thorough, step-by-step process that requires grouping the client's input into categories related to the geometric properties of the input variables of the solution discussed in section 3.0.1. First, the different issues pointed out by the client are mapped onto the site in order to better understand their possible geometrical interpretation with respect to the problem-solving algorithm (from 3.0.1).


Figure 3.4 Mapping the Client's Input

Based on Figure 3.4, the designer began to analyze possible interpretations of the client's information in the language of algorithmic variables. From this diagram some things begin to make sense: for example, for input #8, regarding the view from the highway to the site, the different portions of the highway and the connector from which the site is visible can be treated as axes whose directions are important; following the logic of the concept from 3.0.1, these axes would be sampled to extract forces in their tangent direction. Also, inputs #6 and #7, which describe the link to downtown and the link to Atlantic Station, can be treated as attractor nodes at those locations because of their relatively large distance from the site. Even though, based on the diagram in Figure 3.4, several issues and their implementations begin to fall into place, many unknown factors remain. To reach a better understanding of the client's input and its interpretation, the designer needed to organize (really, to determine) and group the categories according to their role in the problem-solving algorithm. For this purpose all of the user input was subdivided into two categories of issues: those that have a Visual influence on the site (pointcloud) and those that have a Physical one.

Figure 3.5 Categorizing Variables

From the Visual and Physical categories the input parameters were further remapped into additional categories: External Objects, Orientation, and Z/Elevation. The variables grouped under External Objects have the ability (or potential) to influence the creation of objects that extend beyond the boundaries of the site, essentially playing the role of external geometry creation, such as the connectors to the parking and main SCAD buildings, as well as the generation of the acoustic barrier geometry and the physical access to the dormitories across the Spring Buford Connector. Variables grouped under the Orientation category have a direct influence on the orientation of future building elements: the Node-type affecters (the link to downtown and the link to Atlantic Station), the Axis-type affecters of the view from the highway to the site, and the acoustic barrier affecter. Variables in the Z/Elevation category directly control the Z direction (elevation) of the pointcloud. Another important aspect that needs to be determined is the Importance Value assigned to the affecter geometry. Again, this has to do with bringing human language into algorithmic expression. At the level of human input, the Importance Value of an affecter deals with the hierarchy among the affecters; translated into the algorithmic expression, it represents a parameter of an object that sets the value assigned to the forces it emits. A unique range of values has to be chosen for each document, and the Importance Values (parameters) of every object in that document belong to this range. For example, if some affecter object A of the Node type has an importance value of 3, then the radius of the sphere that represents it is set to 2 (document units), and the vectors it emits are initially multiplied by 2, which becomes their magnitude. The same holds true for objects of the Axis type.


The vectors that each of these objects emits, originally unitized, are passed through a multiplication operation, which makes their magnitude (and thus their force value) correspond to the object's Importance Value.
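As a small hedged sketch of this remapping in plain Python (the range bounds and the linear scaling are assumptions for illustration, not the thesis's exact mapping), an importance value agreed with the client is converted into a magnitude and applied to a unitized force vector:

def importance_to_magnitude(importance, importance_range=(0, 8), magnitude_range=(0.5, 6.0)):
    # linearly remap a client-facing importance value onto the document's force range
    i_lo, i_hi = importance_range
    m_lo, m_hi = magnitude_range
    t = (importance - i_lo) / float(i_hi - i_lo)
    return m_lo + t * (m_hi - m_lo)

def scale_unit_vector(unit_vector, importance):
    # multiply an already unitized force vector by the remapped magnitude
    m = importance_to_magnitude(importance)
    return tuple(c * m for c in unit_vector)

print(scale_unit_vector((0.0, 1.0, 0.0), importance=6))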

Figure 3.6 Parameters Informing Affecter Objects

The Importance Value concept used by this particular algorithm falls under the category of Parameters assigned to affecter objects. From this point on it is crucial to understand how to treat the objects' Parameters. Based on the analysis of the client's input, what needs to be determined is whether the Importance Value of an object reflects the client's priorities in the hierarchy of his input or whether it has to be determined analytically. For instance, the client may give priority to the visual link to Atlantic Station over the visual link to downtown, which would make the node representing Atlantic Station emit forces of a higher magnitude (wherever it falls in the range) than those of the node representing downtown. However, in the case of instances of the same type that are represented as separate affecter objects, such as the axes assigned to different portions of the highway, there is a possibility of distinguishing their values based on analysis data. The portions of the highway can be tested for traffic density/intensity, and the output of this analysis gives a range of unique values for each piece of geometry.

3.0.3 Application: Parametric Site

The Parametric Site part of this thesis is a small bonus discussion alongside the actual description of the site-related, client-based problem-solving algorithm. In this portion the author tries to highlight some advantages and to stress the importance of parametric relationships in a project model. The example presented here is directly related to the site exercise and describes the way the computer model for testing the site algorithm was set up. As already mentioned in chapter 3.0.1, the plan was to develop a problem-solving algorithm that could be applied not just to the current site but to many other situations. The ideal scenario was to test the algorithm on different site conditions. It is no secret that building a computer model of site conditions is a rather tedious process that takes precious time. For the sake of time saving and flexibility, the decision was made to invest some time in a parametric definition that could be used to quickly and easily produce a working digital model of any site for testing and further development. The parametric definition was executed in Grasshopper prior to the development of the actual problem-solving algorithm (which would be developed in Grasshopper as well). The decision to use Grasshopper was based not only on the intention to build the main algorithm in it, but also on its parametric capabilities and the flexibility of making adjustments. After the creation of the parametric site definition, it could easily be copy-pasted into the main problem-solving algorithm, merging the two together.

Figure 3.7 Part of GH definition responsible for the parametric site


Figure 3.8 Parameterized Site (top) and User-Drawn Geometry on the XY Plane (bottom)

The idea behind this parameterized site was to import a to-scale image of the site context (top view) and to draw curves for the centerlines of the highways and other major roads. The next easy step was to produce the outlines of the buildings and to set them on different layers based on their height (low, medium, tall). Then the road curves are assigned in the GH definition to their corresponding components, which sample these curves and produce equally spaced points. The only other user input is to assign real-life elevations at these particular points (which can easily be taken from Google Earth or from a more precise site survey). The actual GH definition takes care of the rest:
- rebuilds the road curves at the appropriate elevations and, based on that, the actual highway geometry (the user manipulates the widths with sliders)
- produces the actual ground based on the user-input elevation marks
- assigns 3D geometry to the buildings on site (after the user assigns the drawn building outlines to three corresponding components by height) and sets their bottoms to the appropriate elevation based on the zone they belong to
- produces an adjustable pointcloud, with density and height settings, once the user draws the outline of the actual working site on the XY plane and assigns the curve to its corresponding GH component
These minimal drawing efforts produce a site context model that is completely parameterized and can be adjusted just by tweaking the needed values inside the GH definition. As a result, all of the 3D geometry exists strictly within the Grasshopper code, and the only physical geometry inside the Rhino file is the user-drawn curves for the roads and the context buildings. This implies that any site of the same level of detail can be produced quickly by reusing this GH code, which in the long run saves a lot of time that can be spent on other, more important tasks. As a remark, it has to be mentioned that Grasshopper's parametric capabilities are used here for the convenience of the author, since the execution of the main problem-solving algorithm was intended to be done in Rhino/Grasshopper. However, this functionality could be achieved in other software or plug-ins with parametric capabilities, for instance CATIA/DP, Revit, Houdini, customized applications for Maya, etc.
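Outside of Grasshopper, the road-rebuilding step can be sketched with Rhino's Python scripting interface (rhinoscriptsyntax); this is only a hedged illustration of the idea, the function name and the sample elevation values are hypothetical, and the thesis's actual definition performs this with native GH components.

import rhinoscriptsyntax as rs

def rebuild_road(centerline_id, elevations):
    # divide the flat, user-drawn centerline into equally spaced points,
    # lift each point to its surveyed elevation, and rebuild the curve in 3D
    pts = rs.DivideCurve(centerline_id, len(elevations) - 1)
    lifted = [(p.X, p.Y, z) for p, z in zip(pts, elevations)]
    return rs.AddInterpCurve(lifted)

# usage: elevations read off Google Earth at the sampled points (illustrative values)
road_3d = rebuild_road(rs.GetObject("Select road centerline"), [290.0, 288.5, 287.2, 286.0])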


3.0.4 Application: Applying the Algorithm

Figure 3.9 Grasshopper definition (and its components) of the final algorithm

Based on the theoretical plan and logic for the Context Analysis and Massing part of the thesis developed in the earlier chapters (from 3.0.1), the problem-solving algorithm was developed around the client's input and his desires for the project. Figure 3.9 shows the final (at this point in the project) Grasshopper code and all of its parts. As an output it produces the parametric building shape/volume with the floor slabs, the numeric data associated with them, and the geometry for the physical connectors to the buildings mentioned in the client's list (the main SCAD building, the parking building, and the SCAD dormitories across the highway).


As mentioned in the previous chapter (3.0.3), the final GH definition is a product of two joined definitions:
- the GH definition responsible for the parametric site context (which also includes the code that produces the pointcloud, to be affected, that represents the site)
- the GH definition responsible for the overall manipulations of the pointcloud, all the parametric controls and settings for the affecter geometry, the generation of the NURBS envelope, working with the zoning code, and many other functional options
The purpose of this chapter (3.0.4) is to introduce the reader to the actual sequence of events (and the different pieces of code responsible for their functionality) that the initial site (as a pointcloud) has to go through in order to arrive at the NURBS building envelope that will be taken to the next (more program-related) stage of this thesis. The part of the GH definition that follows the site-context portion controls the Z-ELEVATION of the affected site and the FLOOR PARAMETERS and COUNT. The pointcloud is generated by z-axis projection of the site outline, drawn on the XY plane (over the bitmap), down to the actual ground level at the proper elevation mark. This boundary is turned into a planar surface and z-extruded, either to the desired height for the future building or to the mean height of the surrounding buildings in the zone. In either case the height of extrusion for this volume roughly corresponds to the height of the final building mass. This extruded volume thus becomes the domain for the pointcloud. The points in the cloud are generated from a user-defined grid on the plane of the site and then arrayed in the z direction to fill up the domain. One important thing to consider here is the point grid density: a higher density achieves a higher resolution of the algorithm's implementation for the final envelope, but the recalculation time can increase dramatically and slow down the definition. Another point to consider is that the number of items in the z-direction array sets the count (plus or minus one) for the desired number of final floors, which is another parametric control over this designer/client decision. Last but not least, this part of the definition extracts important site parameters such as AREA, VOLUME, PERIMETER, FLOOR-to-FLOOR height, etc., which (along with the actual point geometry) are used further downstream in the definition.
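A hedged, plain-Python sketch of this pointcloud step is shown below (a rectangular site is assumed for brevity, whereas the real definition clips the grid to the drawn site outline; all numbers and names are illustrative): a grid of points on the site plane is arrayed in z until it fills the extruded domain, and the z count stands in for the floor count.

def site_pointcloud(x_min, x_max, y_min, y_max, ground_z, building_height,
                    grid_spacing=5.0, floor_to_floor=4.0):
    # grid resolution in plan, then arrayed vertically floor by floor
    floors = int(building_height // floor_to_floor) + 1
    nx = int((x_max - x_min) // grid_spacing) + 1
    ny = int((y_max - y_min) // grid_spacing) + 1
    points = []
    for k in range(floors):                       # array the plan grid in z
        z = ground_z + k * floor_to_floor
        for i in range(nx):
            for j in range(ny):
                points.append((x_min + i * grid_spacing,
                               y_min + j * grid_spacing,
                               z))
    return points, floors

cloud, floor_count = site_pointcloud(0, 60, 0, 40, ground_z=287.0, building_height=36.0)
print(len(cloud), "points,", floor_count, "floor levels")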

Figure 3.10 Pointcloud Settings, Controls and carried-over Properties

The next significant part of the GH code downstream is of major importance, because that is where all of the necessary manipulations related to the affecter geometry are executed, starting from the importance settings for each affecter and the settings for its TYPE (on which the kind of vectors extracted depends). The final definition in its current state ended up having three affecters of the NODE type and two affecters of the AXIS type (the latter include several pieces of geometry within themselves). In this part of the paper the author comments on the final settings for each piece of affecter geometry and how they relate to the client's input and desires.

Figure 3.11 Affecter Types and Importance Values

To start with the NODES, the first affecter corresponds to the client's input related to the DOWNTOWN LINK. The actual geometry was tied to the location of the Georgia Capitol: its coordinates and elevation mark were extracted, and this information was assigned to the point that generated the sphere for the vector extraction. At this point the reader should refer to Fig. 3.11 for the affecter types and values. During discussion with the client it was determined that affecter importance values would be assigned on a 0-8 scale. The decision was made to give the DOWNTOWN affecter a value of 6 and a vector-emitting type of ATTRACT. The decision was based on the property of the ATTRACT type to generate a resultant vector pointing in the opposite direction from the pointcloud, which makes the geometry be attracted to the affecter. The second node corresponds to the client's desire for the ATLANTIC STATION LINK. For this affecter the code takes the centroids of each building in the Atlantic Station development and calculates their average point, which becomes the center of the affecter's sphere. Since this affecter was also intended to attract the geometry, as with the DOWNTOWN node, the type was set to ATTRACT (to produce an oppositely oriented resultant vector) and the importance value was set to 4 out of 8. The last (third) affecter of the NODE type dealt with the issue of the BUFFER TO THE HISTORIC BUILDING mentioned by the client regarding the Christian church adjacent to the site. Its center point coincides with the centroid of the building, and the type was set to REPULSE, since this type produces a resultant vector for a node pointing towards the pointcloud, contributing that direction to the final vector. The affecter value was set to 2, since it was considered a low-priority issue. As already mentioned, two affecters of the AXIS type were set up for this exercise. Unlike the NODE affecters, each of which consists of a single affecter, it is different with the AXES. The AXIS affecters correspond to the client's VIEW FROM THE HIGHWAY and NOISE FROM THE HIGHWAY concerns. However, under each of those affecter types (which receive their own importance value) there are several separate pieces of geometry that get another level of importance values to distinguish them within each AXIS affecter; this will become clear in the description below. From the conceptual description of the algorithm in chapters 3.0.1 and 3.0.2, one may remember that there were four curves corresponding to the parts of the highway from which the site is visible.

48

AXIS type that were grouped under the VIEW FROM HIGHWAY AXIS. Based on the communication with the client this affecter was given the highest priority out of all by assigning it VALUE of 8. This means that each of the separate curves received initial value of 8 and their vectors became of this magnitude. However, since each of these four might have different affect on site based on the traffic density they were assigned an additional multiplier in the range of 1-0 which made the final magnitudes of their vectors differ from one to another. In addition to that the situation with the axes is a bit more different then with the nodes regarding the settings about how they emit their vectors. Here the author and the client had to choose from TANGENT vs NORMAL and ATTRACT versus REPULSE. The decision was made for each curve to ATTRACT (vectors are oriented in the opposite direction from the site) based on the direction of the traffic. In addition to that 3 out of 4 curves received TANGENT type since they were relatively remote from site and it had to deal with the property of the direction of each curve and one of the four curves that was relatively close received the NORMAL type based on the way the viewers would interact with the site. The second AXIS affecter group dealt with the NOISE FROM THE HIGHWAY. It included two curves that were derived by projecting the sound cones from the centroid of the site to the closest highway (Spring Buford Connector) broken up by the acoustic buffer of trees and the surrounding buildings. The projection of the cones to the highway became two curves of AXIS geometry. The ACOUSTIC AXIS received the initial important value of 4. In order to distinguish these two curves within the acoustic axis the decision was made to run some acoustic tests on the envelope (produced by all the affecters without plugging in the ACOUSTIC affecter). The center points of the

49

curves become the locations of the sound sources. Site geometry and the envelope were brought to Ecotect and two acoustic analyses were run (one for each source). The goal was not only to precisely determine which sound source has more affect on the envelope but also to express that relationship numerically. In order to achieve that the analysis values (Number Rays Hit per mesh facet of the envelope) were extracted in Excel documents and another GH definition was set up to run through the files in order to compare them and set the range values as a second multiplier for the ACOUSTIC axes (Fig. 3.12). Finally, both of the acoustic axes were set as NORMAL and REPULSE types of the vector emitters.

Figure 3.12 Comparison of the Acoustic Effect of the two Sources


The rest of this part of the definition, which deals with affecter vectors, samples the geometry and generates vectors based on the values and settings assigned to each affecter. Note that the NODE affecters (spherical) go through a scaling process based on their orientation to the site so their vectors don't cancel each other out. The separate arrays of vectors and of the point coordinates at which each vector was sampled are sent downstream to the next portion of the main GH definition.

The following portion of the GH code is remarkable because there the cloud finally gets affected by a unique translation vector calculated for each point in the cloud. Prior to that, all of the vectors and the point coordinates at which they were sampled are collected into two arrays (one for vectors and one for points). Then, for each point in the cloud, the distance to each sampling point is queried. Based on the max and min values, each vector is scaled accordingly (1-0 range), following the logic that the closest vectors remain more effective than those further away. The unique set of scaled vectors created for each point in the pointcloud then undergoes mass addition to produce a resultant vector (the translation vector) by which each point is translated. Since the resultant vectors shoot the pointcloud far away from the site, the affected cloud is brought back on site using a vector between the old and new pointcloud centroids. Next, the cloud is fit back into the boundaries of the initial cloud.

The actual NURBS surface is a product of sections through the isosurface generated by assigning a small charge to each point in the affected cloud. This is done for the purpose of unifying the cloud. However, isosurfaces are very hard to work with and to take any further for design purposes. It was mentioned earlier that the envelope surface produced at this stage of the project had to be proper and geometrically clean in order to keep working with it later. Because of that, a clean lofted NURBS surface had to be generated. The sections through the resultant metaball are cut with planes that correspond to the floor slab settings from the part of the definition that dealt with the cloud generation. Then, these polylines were rebuilt into NURBS curves.
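The distance-weighted blending of affecter vectors described above can be illustrated with a short sketch in plain Python (not the scripting actually used inside the Grasshopper definition). The linear 1-0 falloff between the nearest and farthest sampling point, and the function and variable names, are illustrative assumptions.

import math

def translate_pointcloud(cloud, sample_points, sample_vectors):
    """Translate each cloud point by a resultant of distance-weighted affecter vectors.

    cloud, sample_points: lists of (x, y, z) tuples
    sample_vectors: list of (x, y, z) vectors, one per sampling point
    Assumes a linear 1-0 falloff between the nearest and farthest sampling point.
    """
    moved = []
    for p in cloud:
        dists = [math.dist(p, s) for s in sample_points]
        d_min, d_max = min(dists), max(dists)
        span = (d_max - d_min) or 1.0
        resultant = [0.0, 0.0, 0.0]
        for d, v in zip(dists, sample_vectors):
            w = 1.0 - (d - d_min) / span          # closest vector keeps full strength
            resultant = [r + w * c for r, c in zip(resultant, v)]
        moved.append(tuple(c + r for c, r in zip(p, resultant)))

    # bring the affected cloud back on site using the centroid shift
    def centroid(pts):
        n = len(pts)
        return tuple(sum(pt[i] for pt in pts) / n for i in range(3))
    old_c, new_c = centroid(cloud), centroid(moved)
    shift = tuple(o - n for o, n in zip(old_c, new_c))
    return [tuple(c + s for c, s in zip(p, shift)) for p in moved]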

Figure 3.13 Envelope and Slabs

Just generating a clean surface does not mean very much unless the designer has parametric control over it and can follow through with it to the further stages of design refinement. Because of that, another custom component was added, responsible for working with the city code requirements for this particular zone. The FAR (Floor Area Ratio) for non-residential buildings was suggested to be 3 and the maximum site coverage was limited to 85%; this custom component took this data as input, along with the area of the site and the section curves through the affected isosurface. The algorithm queries the area of the new floor plates (the section curves) and calculates the current relation of the site area to the total floor area produced by the current envelope. Based on that number, a scaling value is calculated to scale the section curves up or down. Within the same component the curves are used to finally output the envelope surface and the floor slabs, adjusted according to the local code requirements (Fig. 3.13).

The final part of the GH definition generates additional geometry responsible for the physical connectors to the main SCAD building, the parking, and the dormitories, all of which were requested by the client. At this stage this part is still a work in progress. The tunnels/connectors are generated, but by default they are based on a connection between the centroid of the resultant envelope and the corresponding buildings. Later, at the planning stage of the project, the intent is to map the existing program in those buildings and tie the points that generate the tunnels/connectors to particular program components (for instance, from the lobby of one building to the lobby of another).
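The FAR adjustment described above can be sketched as follows. The assumption that a uniform plan scale s multiplies each floor-plate area by s squared, and all names, are illustrative rather than taken from the component itself; the 85% coverage check is omitted for brevity.

import math

def far_scale_factor(floor_areas, site_area, target_far=3.0):
    """Return a uniform plan-scale factor that brings the envelope to the target FAR.

    floor_areas: areas of the section curves (floor plates); site_area: site area.
    Scaling every plate by s in plan multiplies each area by s**2, so the
    linear factor is the square root of the required area ratio.
    """
    current_far = sum(floor_areas) / site_area
    area_ratio = target_far / current_far      # >1 scale up, <1 scale down
    return math.sqrt(area_ratio)

# example: six plates of 1400 m2 on a 2600 m2 site (hypothetical numbers)
s = far_scale_factor([1400.0] * 6, 2600.0)
print(round(s, 3))   # linear scale applied to each section curve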


Chapter 3.1

Program and Built Volume


3.1.0 Possibilities for Program Solution

Since the algorithm that produces the building mass was successfully developed, the next stage of the project relates to the Building Program and its components. The goals set for this stage are rather ambitious. Basically, the project is still intended to mimic a real-life interaction between client and designer, and it assumes that the client delivers a more or less detailed program that has to be optimized by the designer. There is already a potential building mass developed (from the previous phase) that will serve as a domain, or boundary, for the program components offered by the client. This doesn't necessarily imply that the building mass produced earlier will stay intact. Instead, it will be considered a flexible boundary for the manipulations of the program, able to withstand minor adjustments if needed (the building mass will definitely undergo some significant changes in the further, more detailed stages of the project, when the mass will be treated as an actual envelope that has to meet certain performance criteria). The program development becomes a very specific matter, since the requirements for each program component are usually very straightforward and have to be met in order to satisfy appropriate building performance and the client's needs. Regardless of the complexity of the program (and its components), it can be seen as a system of objects with their own unique properties. In order to have a successfully functioning system, not only do the requirements for the separate program components have to be met, but the interactions (proximities) of the whole system in general also have to be satisfied. This confronts the designer with a problem of a high degree of complexity. Thus, in order to provide an appropriate solution, this complexity needs to be approached with advanced computational methods that can properly optimize the behavior of the system.

As with the previous phase of the project, the Program Development will be approached by developing an algorithm capable of solving a given program, with all its components and their requirements, for a given domain (building mass). Prior to beginning the design of the algorithm, some objectives and requirements were set:

OBJECTIVES: To produce a system that will work with the input program by computing its components' position, orientation, size, and inter-relationships within an envelope (task domain), and thus to arrive at the combination best suited to the existing conditions.

REQUIREMENTS: the system has to:
- be able to SELF ADJUST and SELF ADAPT
- RECOGNIZE its COMPONENTS
- RECOGNIZE its bounding VOLUME (domain)
- have a system of PROPERTIES
- take REFERENCE DATA as input

There is an opportunity to solve this problem through the use of EVOLUTIONARY ALGORITHMS. Generally speaking, these algorithms attempt to mimic the natural ways of dealing with complex systems, whose behavior is unpredictable and is based on the properties of each separate, simple component of the complex system. Evolutionary behaviors in general can be found in Genetic Algorithms, Artificial Intelligence, Emergent Behavior Algorithms, Ant Colonies, and many other fields. An idea all these systems have in common is that they concentrate on creating the rules for the behavior in general rather than programming the end result. This means that the result is unpredictable in most cases, but it always emerges out of the interaction between the components of the system and their individual properties. Probably the most suitable approach for solving the given problem of programmatic distribution is the one found in the behavior of Ant Colonies and the algorithms that mimic these systems. Even though the study of evolutionary systems and their application is a relatively young field (around 40 years old), they have lately been widely used for problem-solving purposes, primarily in engineering (to find the most efficient solutions), game design, the World Wide Web, and many other fields.

There is a need to discuss the principles of the algorithms behind Ant Colony behavior in order to understand how it correlates to the given problem (program distribution) and how it can help to solve it. Ant colony algorithms have a number of simple agents that represent the ants. Even though the behavior of each of these systems (Ant Colonies) is considered complex, the actual agents are very simple pieces of information with a certain number of properties that let them interact with the other agents. Apart from the agents, another main component of the system is the domain within which the agents interact. Depending on the task the importance of the domain might vary, but its presence is a must for computational purposes.


Figure 3.14 Ant Colonies System Components

One of the most common tasks that has been tested within the Ant Colony logic is finding the shortest path to a food source. This leads to another important conceptual component of the system, the TARGET of the agent activity. The target in this case is the food source. For computational purposes a target is a must, since it identifies a reference point against which the agents' activity is measured. Stephen Gilmour and Mark Dras, in their paper Understanding the Pheromone System within Ant Colony Optimization, cover the elements of Ant Colony Optimization algorithms and particularly the role of pheromone in these systems: "ACO uses agents modeled on ants to find heuristic solutions to COPs. These agents communicate only indirectly by laying pheromone in the environment (stigmergy). The more pheromone a particular part of the environment contains, the more desirable that part of the environment becomes to the ants. This is how the ants find solutions."6 Ants lay out pheromone paths, and that is the way they communicate with each other. Basically, pheromone can be seen as a success-meter: each ant lays out an amount of pheromone proportional to how successful its activity was. Pheromone also has the ability to evaporate with time. Fig. 3.14 demonstrates the main components behind the Ant Colony systems, their meaning, and some possible applications for the purpose of the programmatic development.
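A generic sketch of the pheromone bookkeeping that such algorithms rely on is shown below. It is not part of the solver developed later in this thesis; the dictionary-based trail representation, evaporation rate, and deposit rule are assumptions for illustration only.

def update_pheromone(trails, solutions, evaporation=0.1, deposit=1.0):
    """trails: dict mapping an environment element (e.g. an edge) to its pheromone level.
    solutions: list of (elements_used, quality) pairs, quality in 0..1."""
    # evaporation: every trail weakens a little each iteration
    for key in trails:
        trails[key] *= (1.0 - evaporation)
    # deposit: more successful solutions lay more pheromone on the elements they used
    for elements, quality in solutions:
        for key in elements:
            trails[key] = trails.get(key, 0.0) + deposit * quality
    return trails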

Figure 3.15 Program Components and Properties

Gilmour, Stephen. Understanding the Pheromone System within Ant Colony Optimization (paper presented at AI 2005: Advances in Artificial Intelligence, 18th Australian Joint Conference on Artificial Intelligence, Sydney, Australia, December 5-9, 2005).


For this exercise the program proposed by the client was for the Student Media Center Café. It consisted of several components:
- Exchange Space
- Collection
- Process Gallery
- Document Gallery
- Sampling Gallery
- Support Space

Each of the program components, of course, has its own requirements and properties, which can be seen in Fig. 3.15. Just as in the previous stage of the project, all of this information has to be thoroughly analyzed in order to determine how it can be interpreted and turned into the algorithm. The preliminary decision (in order to begin approaching this problem) was to use the Ant Colony logic to produce the program-solving solution. Because of that, there is a need to discuss how the information provided by the client (in the form of a quantitative and qualitative program) will be used in computation. First, the separate program components will be treated as simple agents, and their requirements will become agent properties and general rules for the whole system. The agents will have several properties taken from the program requirements:
- Area/Size
- Proximity to Each Agent
- Solar Exposure req.
- Vertical Location (accessibility factor)
- Noise req.

In order for these properties to be satisfied there is a need for certain targets that correspond to each property of the agents. They will also be introduced for the purpose of querying the level of success (pheromone-like) after each iteration (evolution) that the system goes through. Based on the response, the system will receive the command to either continue the evolution or terminate the loop if the results were satisfactory. The targets will be derived from various analyses to which the building mass (the system domain) will be subjected. For instance, to set up targets for the Solar Exposure properties, the envelope can be exposed to a solar radiation analysis and the resulting values can be run through a code that maps the zones with the BIGGEST and SMALLEST solar exposure, which become reference targets in the form of geometry. The same logic can be applied for the NOISE targets, where acoustic (instead of solar) analysis values will be mapped on the surface of the building volume.
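The target-mapping idea for a single property can be sketched as follows, assuming the analysis values are exported as (point, value) pairs; this reflects the Min/Max concept described here, which is later revised in chapter 3.1.1.

def min_max_targets(samples):
    """samples: list of ((x, y, z), value) pairs taken from the envelope analysis.
    Returns the locations of the smallest and biggest exposure, which would serve
    as the two geometric reference targets for the given property."""
    lo = min(samples, key=lambda s: s[1])
    hi = max(samples, key=lambda s: s[1])
    return lo[0], hi[0]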

3.1.1 Program Solver: Preparation

The previous chapter (3.1.0) discussed possible solutions and preliminary concepts for the creation of a generic Building Program Solver. It highlighted the level of complexity the designer has to deal with if the building program is treated as a group of unique system participants, each with its own properties that have to be satisfied in order to deliver a successful solution for the building program as a whole. This complexity demands a non-linear solution and, as mentioned earlier, the introduction of evolutionary types of algorithms. This chapter (3.1.1) will discuss in detail the logic and methods behind the actual Building Program Solver, as well as the author's remarks on the process of making it.

The Grasshopper scripting environment was chosen as the platform for the solution to operate in. In order to wrap the behavior of the system of program components (for the Program Solver) in code, there was a strong need to turn to the programming paradigm of Object Oriented Programming (OOP). This is how Wikipedia describes this paradigm: "An object is a discrete bundle of functions and procedures, all relating to a particular real-world concept such as a bank account holder or hockey player. Other pieces of software can access the object only by calling its functions and procedures that have been allowed to be called by outsiders. A large number of software engineers agree that isolating objects in this way makes their software easier to manage and keep track of."7 OOP is basically an abstraction of real-world things. For example, if the system has separate elements like Agents, Targets, etc., one can create reusable pieces of code that group the properties and functions related to one particular element. This way it is easier to keep track of the whole system and group the behavior of each participant. However, even though OOP makes the implementation process somewhat easier, evolutionary systems are hard to digest and actually start programming because of their non-linearity. Particularly because of very limited experience with OOP and pretty much none with evolutionary algorithms, the author's decision was to first create a simpler evolution-based system and write a class library for it prior to diving into a more complex one such as the Program Solver.
7

Wikipedia contributors, "Object-oriented programming," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/wiki/Object-oriented_programming (accessed May 18, 2010).


The practice exercise (for the purpose of getting more familiar with evolution-based systems and OOP) was determined to be a simplified and abstracted version of an example found in a paper by Peter Testa, Emergent Design: a crosscutting research program and design curriculum integrating architecture and artificial intelligence. The MIT Artificial Intelligence Lab developed flexible Emergent Design software (a toolbox) for the School of Architecture and Planning. The paper describes several experiments conducted by architecture students during a workshop. These experiments mainly dealt with solving different problems (or just observing behavior over time) at the urban scale by prescribing desired behavior to different urban elements. The similar but simplified experiment conducted for this thesis was an abstraction of a two-dimensional site that is subdivided equally into a controlled number of zones, with their centroids acting as attractors. Additionally, this system has agents, which are dynamic elements. They constantly change their location by being attracted to the closest, strongest zone. Finally, another type of element in the system is the cell. Cells are 1x1 squares that make up the grid system of the site. They serve as a reference for the movement of the agents. Programming-wise, five classes were created:
- Zone: carries each zone's boundary, centroid, and current attractor value
- Cell: carries similar information as the zone (except for the attractor value) for each cell
- Agent: carries the current location of the agent (x, y), its translation vector, and its path
- SystemSettings: holds all the default information about the system (number of agents, zones, attractor value step, etc.)
- SystemMethods: carries all the functions that make the system run


Figure 3.16 System's State after 20 Iterations

There is a need to briefly describe the logic behind the behavior of this practice system. The user sets up the parameters for the system: number of Zones, number of Agents, number of iterations, various thresholds, etc. Upon initialization (before the first iteration) a certain number of elements of the Agent type are created and randomly placed on the Site. The attractor value of each Zone is either randomly assigned in the range between the minimum and maximum possible values, or all of them get the same value in the middle of the Min/Max range (MinVal + (MaxVal - MinVal) * 0.5). Then each zone queries how many Agents currently reside within its boundary; if the number is more than the average (AgentAmount / ZoneAmount), it decreases its attractor value by a predefined step (which actually makes this zone a stronger attractor), and the other way around if the number of Agents within the boundary is less than the average (which makes this Zone's attractor weaker). Once the attractor values of each zone are overridden, it is time for the next iteration. Each Agent has to find the strongest attractor for itself. This is accomplished by querying the distance to each Zone's centroid and multiplying it by its attractor value. Obviously, if all the zones have the same attractor value, then the closest one wins; otherwise, most likely the zone with the smallest attractor value (the strongest) will have an edge over its neighbors. Once each agent has found its attractor, it creates a translation vector between itself and the centroid of the cell and makes a move towards it. After this the Zones' attractor values are recalculated again and the system proceeds to the next iteration. This system has a lot of flexibility built into it, such as being able to determine the initial attractor values of the zones, the threshold for the attractor value step, the number of participants, etc. This makes it interesting to observe the final outcome, which otherwise would be almost impossible to predict. Even though this concept is somewhat simplified, it can be used to solve or simulate various architectural/urban design problems and behaviors. For instance, during the experiments that MIT students conducted with MIT's ALife Toolbox (described in Peter Testa's paper), similar algorithms were used to compute how housing in a populated area aggregates around existing schools. Nevertheless, the goal for this practice exercise was to gain experience dealing with the nondeterministic nature of these algorithms as well as class-library programming experience, and these goals were accomplished.
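A condensed sketch of one iteration of this practice system is given below. It is an approximation, not the actual class library: zone membership is simplified to the closest centroid, agents move straight toward the winning zone's centroid rather than through the cell grid, and the 0.5 step size and clamping are assumptions.

import math

def closest_zone(agent, zones):
    return min(zones, key=lambda z: math.dist(agent, z['centroid']))

def step(zones, agents, step_value, min_val, max_val):
    """One iteration of the practice system.

    zones:  list of dicts with 'centroid' (x, y) and 'attractor' value
            (smaller value = stronger attractor, as in the text).
    agents: list of [x, y] positions, moved in place.
    """
    average = len(agents) / len(zones)

    # 1. each zone re-tunes its attractor value from its current population
    for z in zones:
        inside = sum(1 for a in agents if closest_zone(a, zones) is z)
        if inside > average:
            z['attractor'] = max(min_val, z['attractor'] - step_value)   # gets stronger
        else:
            z['attractor'] = min(max_val, z['attractor'] + step_value)   # gets weaker

    # 2. each agent moves toward the centroid of its strongest (distance * value) zone
    for a in agents:
        target = min(zones, key=lambda z: math.dist(a, z['centroid']) * z['attractor'])
        vec = [c - p for c, p in zip(target['centroid'], a)]
        a[0] += 0.5 * vec[0]    # 0.5 is an assumed step size
        a[1] += 0.5 * vec[1]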


Figure 3.17 Randomly Initialized Agents and the System's State after 20 Iterations

3.1.2 Program Solver: Solution

The Program Solver created for this part of the thesis is intended to be somewhat universal; however, there is a specific program for the Student Media Café Center provided by the client (described in chapter 3.1.0). The idea for the Program Solver (described in 3.1.0) came very close to the actual solution, with some technical changes. After thorough planning, the concept of the solver is to treat the existing building envelope generated during the previous phase of the project as the reference boundary for the program components. As previously discussed, each program component will be treated as a separate Agent, with its properties and the requirements for them (mostly dealing with environmental data) taken from the specifications of each program component provided by the client. Again, a library of classes had to be designed to describe the behavior of the solver. The library for the final solution consisted of five classes. As discussed earlier in the introduction to OOP, the classes represent an attempt at an abstraction of real-life things. Thus, for the sake of clarity, it is beneficial to go through each class and describe its contents in order to understand how the final solution functions.


Figure 3.18 Description of the Class Library for the Solver

The first class is called Agent. It represents a single program component. When it is created it receives a set of values (taken from the program) that correspond to each property upon which each program component will be evaluated. These values should be treated as the perfect-scenario value for each evaluation property (environmental data, etc.). Apart from the perfect requirements, each Agent has a set of variables that store the actual values corresponding to these evaluation properties. The actual values change after each iteration and depend on which location on the envelope the Agent is exposed to. There is also a Strict property that each Agent gets assigned by the client. It reflects how important it is for this program component to satisfy its requirements (the Strict property will be covered in more detail when the Score/Evaluation system is discussed).

Apart from the properties and requirements, each agent contains some geometric information. Basically, the decision was made that the most efficient and reliable way to subdivide space within the system domain (the given building envelope) between all the program components is via 3D Voronoi diagrams. This way there is no loss of space involved, and the sum of the volumes of all program components will be exactly equal to the volume of the boundary envelope. To be able to generate Voronoi diagrams one needs a domain and a discrete set of points that become the drivers for the Voronoi cells. Thus, each Agent contains a point (Voronoi generator) and a resultant Voronoi cell, plus the properties derived from it, such as the cell volume, the percentage of the volume against the whole domain, the centroid of the cell, etc.

The next class is called Target. This name is a bit misleading because it symbolizes what this class was originally intended to do (to store a collection of targets), but during the design process of this software the functionality of this class changed. This class was meant to be a container for reference/environmental data for one property. For example, in the case of this exercise there would be an instance of the Target class for each of the following: Wind Velocity on Envelope, Direct Solar Radiation on Envelope, Acoustic Incidence on Envelope, and Z (height) Location within the Envelope. All of this data (except for the Z values) was extracted from analysis software after the analysis was conducted for this envelope. Coming back to the name Target: initially the plan was to run this data through a script that would analyze it, determine the minimum and maximum values in the set, and find their locations on the envelope. Then these two points (Min and Max per each set of data/property) would define (or rather influence, in combination with all the Targets) the direction for each Agent aiming towards its best location within the envelope. This concept ended up being not a very good idea, since it did not take into consideration the values on the rest of the envelope. Of course, the Min and Max locations were correct, but the rest of the envelope values were not properly represented. Instead, the decision was made to create zones on the envelope (sub-surfaces of the whole envelope) whose density is flexible and determined by the designer. The analysis values on the envelope were then averaged per zone. This way, instead of just Min/Max locations per property, each Target received a set of zones and their average environmental values. The envelope was thus described fairly, and now each Agent could query its position in space, find the zone it is exposed to at the moment, and inherit all the values for each of its properties from this zone. It should probably be mentioned that the most successful and fair subdivision for the zones, at least in the Z direction, would correspond with the floor height. This way, if the building has 6 floors, there would be the same number of zone rows in the vertical subdivision.

System_Settings and System_Methods are the following (third and fourth) classes of the library. System_Settings contains all of the information about the whole system in general that is not meant to change during the run. These are parameters defined by the designer and the client and assigned to the system before the run, such as the number of agents, the number of properties, the domain's surface geometry, its volume, thresholds, etc. This class is crucial from the point of view of functionality because all the other elements of the system (Agents, Targets, etc.) turn to it to query this important information. System_Methods is the executive brain of the system, since it contains all the methods/functions that run it. It consists of three main parts: the part responsible for the initialization of all the elements, the random point generator and Voronoi-related functions, and the evaluation/score functions (these parts of System_Methods will be covered later in the logic description).

The last class is called Solution. Its purpose is to store the most successful combination of the program distribution. It needs to be said that initially the designer sets a number of iterations for which the solver will run and try to calculate the most suitable programmatic solution. Of course, the higher the number of iterations, the longer it takes for the system to finish the calculations; however, the odds of finding a better solution are higher. Another difference introduced into the final design of the Program Solver, compared to chapter 3.1.0 (where the general thoughts on how it might work were discussed), is that instead of constantly evolving, the agents are regenerated randomly after each iteration, and the solution after each iteration is evaluated based on a total score of how well it satisfies the settings for the perfect scenario. That is when the Solution class comes into play. As was said, it stores the most successful combination of program components to date, and it also stores a variable with the highest possible score to achieve and a variable with the score of the solution currently stored there (the most successful one to that point). Each generated arrangement of program components gets evaluated once every iteration and receives its score, and this score is tested against the score within the instance of the Solution class. If the score of the current iteration is higher than the one in the Solution, it gets overridden and a new, fitter program arrangement is stored. Eventually, depending on presentation and design consideration needs, the design team or client might need multiple solutions for further consideration rather than just the one with the highest score that currently gets stored in the Solution class. Even though the configurations of the successful solutions are driven by the properties and requirements of each specified program component (thus the solution delivered by the solver should be a satisfactory one), some other issues might arise after the solver has given the single highest-evaluated configuration. In this case, having a variety of solutions with relatively high scores might come in very handy for human evaluation by the design team and the client. This is not an issue with this particular solver; the code involved is flexible enough to be easily adjusted to provide the user with this option.

Figure 3.19 Logic behind Agents Property Evaluation

One last thing is left to explain before the full step-by-step sequence of actions of the Program Solver can be described, and that is the way the evaluation/score-assigning algorithm works. It will be demonstrated on the example of one Agent and just one of its properties/requirements, for instance the solar exposure property (each Agent in the precedent described in this paper has four properties). Figure 3.19 diagrammatically demonstrates, step by step, how the evaluation occurs. First, the requirements for the properties of each program component (user input) are set on a 1-10 range. This is a generic interface for the solution, because the user might not even be aware of the range of per-zone values that each of the environmental analyses returned. Instead, he or she can choose whether the desired location for a given program component has to be exposed to the highest values (slide to 1), the lowest (slide to 10), or something in this range.

Figure 3.20 Requirement Sliders for Properties of an Agent

The software then goes to the instance of the Target class that stores the solar exposure data and queries the numeric domain of all the values and the Min/Max values. Based on that, it calculates where this requirement value (1-10 range) falls in the actual range of available solar radiation values and creates the perfect/desirable solar radiation value for this program component. Next, based on the Strict property of this component, the software evaluates how big a chunk of this exposure-value domain can earn this program component ANY score. Conceptually, the Strict value indicates how crucial it is for this program component to satisfy this particular property and how it will affect the success of the whole system. So, if Strict equals 5, then the chunk of the domain that can possibly receive a score higher than zero will be half of 1/5 of the domain on each side of the earlier calculated desirable value for this property. If Strict is 4, then 2/5 of the domain will be used, etc. Once the Strict sub-domain is determined, the software proceeds to score assignment by following the diagram for step #3 of Fig. 3.19: if the real value for solar exposure is not within this Strict sub-domain, then the Agent immediately receives a score of zero for this property. If it falls inside the Strict sub-domain, then the software assigns 7 and keeps diminishing the sub-domain until either the real value falls out or the score of 15 is reached (the highest possible), which would mean that the real value is extremely close to the requirement. The software does the same set of operations for each of the values and sums the score for this particular agent; then all the agents' scores are combined, which becomes the success evaluation score for the whole run (this score gets tested against the score in the Solution to see which is a better fit). At this point most of the main details regarding the Program Solver algorithm are covered and the explanation can finally proceed to the sequence of the main steps that the solver has to go through in order to arrive at a solution (Fig. 3.21).
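The scoring of a single property can be sketched as follows. The mapping of the 1-10 slider, the Strict chunk widths, and the 0/7-15 scores follow the description above; the exact shrinking schedule between 7 and 15 is not spelled out in the text, so a linear shrink is assumed here, and all names are illustrative.

def score_property(actual, slider, strict, v_min, v_max):
    """Score one Agent property, following the logic of Fig. 3.19.

    actual      - value the Agent's current zone exposes it to
    slider      - client requirement on the 1-10 scale (1 = highest values desired,
                  10 = lowest values desired)
    strict      - Strict value; 5 allows any score inside 1/5 of the domain,
                  4 inside 2/5, etc. (this continuation of the pattern is assumed)
    v_min/v_max - range of the analysis values stored in the Target
    """
    span = v_max - v_min
    desired = v_max - (slider - 1) / 9.0 * span        # map the 1-10 slider into the domain

    half_width = (6 - strict) / 5.0 * span / 2.0       # half of the scoring chunk
    if abs(actual - desired) > half_width:
        return 0                                       # outside the Strict sub-domain

    # inside: start at 7 and keep shrinking the sub-domain, +1 per step, up to 15
    score = 7
    steps = 15 - 7                                     # assumed linear shrink schedule
    for i in range(1, steps + 1):
        shrunk = half_width * (1.0 - i / (steps + 1.0))
        if abs(actual - desired) > shrunk:
            break
        score += 1
    return score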


Figure 3.21 Algorithm Steps

The solution consists of three main parts. The first is the initialization. In this part the software collects all of the user input data and initializes (creates and assigns default/initial values to) all the participants. The next part is the loop, which runs for as many iterations as the user specifies. Within the loop, the first procedure is responsible for generating a set of random points within the envelope boundary. These points become the generators for the Voronoi diagram, and their number equals the number of agents. After the points are generated, the Voronoi algorithm is called; it generates the Voronoi cells and assigns one to each agent. Then the evaluation block starts. The following procedure evaluates whether the newly created cells of the Agents satisfy their Volume property requirement within a certain (user-defined) threshold. This is simply accomplished by querying each Agent's cell volume and then running (CellVolume / EnvelopeVolume) * 100. If the volumes of all cells satisfy the Volume property requirement, then this iteration is passed to the actual score-assigning procedure, where it will definitely receive some score; otherwise the loop skips to the next iteration (random point generation, etc.). As one might notice, conceptual priority is given to the Volume/Area property of the program components. The author here emphasizes the importance of the square footage property, since most often it is the primary demand in program distribution. In this case, regardless of which program solution is returned in the end, it will always precisely satisfy the area/volume demand of the program components. However, considering that sometimes (very rarely, though) satisfying the area/volume requirement is not the primary priority, the system is left flexible enough to set up any requirement parameter instead of area/volume. If the iteration has satisfied the Volume requirement, it is passed to the evaluation procedure discussed earlier (Fig. 3.19). The next and last step of the loop is to compare the score of the current iteration with the score of the most successful program configuration to date, which is stored in the Solution. If the score of the current iteration is higher, the Solution will be overridden. It receives all the data the designer needs: cells, volumes, score, and general diagnostics of this particular program configuration. The last (third) part of the solution deals with outputting the data (geometry and statistical data) to the user and its visualization.

In addition to functioning in 3D, the solver is capable of dealing with programming and further space subdivision in 2D as well. Eventually, once the larger program components have to be broken down into the actual spaces, this comes in really handy. The 2D subdivision works on the same principle as the 3D one; however, the solver is aware of the floor plates. In general, this solution is flexible and can be adjusted for virtually any number of requirements and any resolution of the program, whether it is a wide scope of program components (covered in this exercise) or a more detailed one, for instance solving a configuration for a surgical ward. It cannot be run blindly without understanding the goals and requirements for the solution that needs to be found; it does need thinking and an understanding of the requirements from the designer in order to properly formulate the question.
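A skeleton of the loop described above is sketched below. The random point generation, the Voronoi call, and the per-agent evaluation are stand-ins (they depend on the Rhino/Grasshopper geometry and on the scoring routine of Fig. 3.19), and all names are illustrative, not the solver's actual code.

def run_solver(agents, envelope, iterations, volume_tolerance=5.0):
    """Initialize, iterate, keep the fittest arrangement.

    agents   - program components, each with 'target_share' (% of envelope volume)
               plus the requirement data used by evaluate()
    envelope - object exposing random_points(n), voronoi(points) and .volume
               (stand-ins for the Grasshopper/Rhino geometry calls)
    """
    best = {'score': -1, 'cells': None}                 # the Solution container

    for _ in range(iterations):
        points = envelope.random_points(len(agents))    # Voronoi generators
        cells = envelope.voronoi(points)                # one cell per agent

        # volume check first: every cell must hit its share within the tolerance
        ok = all(
            abs(cell.volume / envelope.volume * 100.0 - agent['target_share'])
            <= volume_tolerance
            for agent, cell in zip(agents, cells)
        )
        if not ok:
            continue                                    # skip straight to the next iteration

        score = sum(evaluate(agent, cell) for agent, cell in zip(agents, cells))
        if score > best['score']:
            best = {'score': score, 'cells': cells}     # fitter arrangement overrides Solution
    return best

def evaluate(agent, cell):
    """Placeholder for the per-agent property scoring described in Fig. 3.19."""
    return 0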

3.1.3 Circulation and Egress

There is additional functionality that the Program Solver has beyond finding a suitable program configuration based on a building envelope/boundary and the program requirements provided by the client. The actual 3D space subdivision algorithm described in chapter 3.1.2 does the heavy computational part, but once the system has determined the volumes for each program component, it automatically projects them onto the floor plates. The really important part here is that no data is lost during this process; it just goes from one state to another, becoming more and more refined, from a 3D Voronoi diagram dealing with volume ratios with respect to the whole envelope to 2D projections on the floor plates and their areas. There is full control over the square footage of every program element on every floor level.


Figure 3.22 Manipulations with Primary Circulation and Egress

Voronoi diagrams are very often used for finding closest distances and for efficient subdivision of space, which suits the purpose of building programming quite well. The primary circulation in the building can be described by the shared edge boundaries of the neighboring cells in 2D. An interesting property of the Voronoi distribution is that there is always access to the outer boundary of the envelope. This opens up a great possibility for placing and organizing egress locations at these points relative to the envelope and carrying them down to the ground level; basically, it turns back from the 2D diagram to 3D. The Voronoi circulation schemes automatically avoid producing any dead ends, which makes them very efficient. Additionally, the user gets control over the ratio of the primary circulation, so by setting the required percentage the circulation will adjust itself accordingly.


Chapter 3.2

Building Systems


3.2.0 Smart Building Systems. General Thoughts

As mentioned in the earlier chapters, this thesis attempts to mimic the usual flow of an architectural project; as the project moves along, it is narrowed down to more and more detailed development. This chapter will transition from dealing with the building program in general to a discussion of building systems and their components. The research at this stage of the project concentrates on smart/dynamic/adaptive building systems from the point of view of their performance as well as the actual process of their design and development.

In order to perform at a high level, building systems do not necessarily have to be dynamic. There are quite a few examples of buildings that utilize static systems that are a product of very thorough analysis and planning (whether environmental, human factor, or human comfort analyses). As a result these systems demonstrate fantastic performance, and by getting the most out of their design the building becomes very efficient and inexpensive to operate and maintain. In addition, well thought out building systems contribute a great deal to user comfort, which is definitely as (or more) important than the reduction in the building's maintenance and operation costs. Understanding the specifics of the surrounding environment is a key requirement for the design of a well-performing building. A good example of that is vernacular (low-tech) design. Vernacular architecture is in itself a reflection of the surrounding context and a product of knowledge and observations that have been collected and passed along over generations. It is this collective knowledge, along with the trial-and-error method, that lets inhabitants achieve high-performance buildings with minimal technological interference.

However, the surrounding environment (which has a major effect on how buildings perform) is not static by nature. Its factors constantly vary, whether it is the time-of-day difference in solar exposure values, the radical difference in seasonal temperatures in places far from the equator, the basic unpredictability of weather patterns, or the number of people present in the same space depending on the event. Normally, the desired requirements for space comfort (as an example of just one of many things that building systems have to deal with), even if they vary slightly over time or between different personalities, definitely do not change as often as the environment. Thus, it is obvious that static hardware (which is basically what static building systems are) cannot adjust itself properly to many conditions; instead, it can only be designed to perform in the desired way under several of the most common and crucial scenarios. For instance, static louvers or overhang shading that (depending on the local sun angles) block direct solar rays in summer and allow their access in winter. Considering this, it would be fair to suggest that dynamic systems have the edge over static ones simply because they can be set up to recognize and analyze what exactly is going on around them. Then, based on this information, they immediately respond by adjusting their behavior in order to maintain consistently high performance.

In general, it does not matter as much what type of system it is, whether it is a glare-control façade system, an interior climate control mechanism, or a mechanical circulation system. Any of these types has to satisfy several must-have conditions in order not only to be considered a well-designed and well-performing dynamic building system but simply to be agreed upon by the client to be utilized. They have to satisfy these criteria:
- have some device or set of devices that lets the system sense and evaluate the surrounding environment (for instance, a network of sensors)
- have a brain (software that describes the behavior of the system) that processes input and sends commands to the hardware
- the hardware has to have enough flexibility to accommodate all possible software commands due to all possible environmental changes
- be cost-efficient, with a reasonable payback period: cheap to design (parametric controls and PM), manufacture (as few custom components as possible), and maintain/operate

3.2.1 Smart Building Systems. Glare Control, Aperture Curtain Wall

The goal for this part of the thesis was to experience the hands-on design of a rather detailed dynamic building system component, based on the principles and criteria for dynamic systems covered at the end of chapter 3.2.0. The choice was made to develop a system of aperture-based façade panels, along with a built-in cost/quantity/performance-control project management tool and a method of populating the panels over a doubly-curved façade surface. Figure 3.23 describes the goals set for each separate conceptual part of this curtain wall development. These parts of the project include the actual DESIGN PROCESS and everything related to it, the development of the SOFTWARE to run the aperture system (the logic behind it and its functionality), and finally the ECONOMIC aspects of the project development.


Figure 3.23 Design/Software/Economy Goals

3.2.1.a Design Process

One of the goals for the actual design process of the aperture curtain wall was to get the most out of the functionality and flexibility of the software (Grasshopper for Rhino 4) and to create a fully parameterized digital prototype of a single instance of the curtain wall. There are several reasons for that. Some of them have to deal with actual economy and cost reduction on several levels, such as detailed control over materials throughout the whole project and parametric relationships between all the parts, which allow for quick minor and micro changes without having to redo the whole drawing set (the economic benefits will be described in more detail in chapter 3.2.1.b). Another benefit of parametric links between all the components is that instead of being generic line work or surfaces, they become a meaningful assembly (a set of smart objects) containing all the necessary data about themselves (which can be queried at any time) in the virtual environment. This way of designing a product leads to a better understanding of the object and its components (as well as their interrelationships) by the designer.

Figure 3.24 Exploded Assembly

Figure 3.24 displays an exploded view of the assembly. The design of the aperture unit is based on a combination of static (no motion) parts and those that are designed to adjust their position according to the changes that the surrounding environment undergoes (in this case, the location of the sun). The base of the assembly is a quadrilateral façade sandwich cladding panel that is connected directly to the main façade structure. Since the façades to be populated with this system are assumed to be doubly curved, the quad panel is the only custom part in the whole assembly. The sandwich panel has a slot/opening that hosts the actual aperture component, which incorporates all the functionality of the system. Its geometry is based on a circle whose radius represents the radius of the actual fenestration opening. The base of the aperture sub-assembly is the frame element. It is important because it connects the dynamic part of the system to the cladding component and thus to the structure. It is connected to the cladding quad via a variable number (defined by an engineer) of bolted plates. The dynamic part of the assembly consists of the so-called blade sub-assembly. The blades are the elements that make up the actual aperture and are geometrically segments of a circular patch. Each of the blades has three arc-like sides and three rounded corners. The material for the blades is a vinyl-coated polyester mesh fabric that provides 91% glare reduction. When the aperture is fully closed, it is supposed to restrict the penetration of almost all direct solar radiation and prevent the interior from overheating when this effect is desired (for instance, during hot months). It substantially reduces the loads on the air conditioning system. The blades are hosted by the main dynamic mechanism: a combination of servo motors connected to a distributing microcontroller board for each aperture, and pivot cylinders with forks that hold the actual blades. The interior part of the frame is manufactured with special slots with a step-like offset that host the servo motors. The step-like offset accommodates the clearance the blades need so they don't interfere with one another when the aperture begins to open (avoiding clashes). The pivots are attached to the shafts of the servo motors. When the motors receive a signal (in degrees of rotation) from the microcontroller board, the whole mechanism comes into action and the aperture begins to close or open. Lastly, the frame has two profiles, one on the exterior side and one on the interior. These profiles hold the exterior and interior glazing; this way the aperture is enclosed in double glazing, which protects the dynamic mechanism.

Figure 3.25 Panelization Logic

Another goal for the design process part was to come up with a mechanism that could take a doubly-curved design surface of a façade and populate it with instances of the actual aperture cladding component, as well as to develop the structure for the system. The easiest solution for this problem would have been to base the aperture cladding panels on a triangle instead of a quadrilateral. This would allow for an easy algorithm of surface tessellation with triangles. However, the problem with this configuration is that triangular panels would significantly decrease the ratio of glass to solid/non-transparent surface of the façade (as well as increase the quantity of aperture assemblies), which would lead to a reduction in the amount of natural light penetrating the building. After a few tests, quadrilateral panels proved to have the edge over triangular panels in the amount of glass surface they can host. That is why the decision was made to use quads instead of triangles. This decision led to another complication with the panelization/tessellation algorithm: it is much more complicated to subdivide a doubly curved surface with planar quadrilateral surfaces and still follow the shape of the original surface. Figure 3.25 displays a diagram of the flat quad panelization algorithm that was written for this particular curtain wall system. The logic behind it is that the user controls the desired U and V subdivision of the initial façade surface and keeps adjusting it until the desired ratio is found (note that even though the logic is described as a step-by-step process, the actual surface subdivision, the panelization, and the population of the surface with the curtain wall components happen in real time). This produces a set of curved sub-surfaces from which the algorithm extracts four corner points. In order to create a plane one needs three points in space. The next step the algorithm goes through is to take three out of the four points of each sub-surface and create a plane onto which the fourth point is projected. These four (now coplanar) points are used to create a flat quad and two triangles (connected to the single point that was projected onto the plane). The hierarchy of parametric relationships in the aperture assembly component is set up so that it only needs four coplanar points to be populated (these four points are used to create the shape of the sandwich panel). The rest of the aperture component, which is predominantly based on circles of different radii, generates itself, with its reference (the center for all the circles) at the centroid of the four reference points of the quad. This way, once the flat quads are generated on the base surface, the algorithm passes sets of four coplanar points to the smart component and the surface gets populated. The inset of the component defines the structure of the curtain wall (Figure 3.25).
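The planarization step at the core of this algorithm (fit a plane through three corners, project the fourth) can be written out as a small self-contained sketch; the vector helpers and names are illustrative, not the Grasshopper component itself.

def planarize_quad(p0, p1, p2, p3):
    """Flatten one sub-surface quad: fit a plane through the first three corners
    and project the fourth corner onto it. Points are (x, y, z) tuples."""
    def sub(a, b):  return tuple(x - y for x, y in zip(a, b))
    def dot(a, b):  return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    n = cross(sub(p1, p0), sub(p2, p0))                 # plane normal from three corners
    d = dot(sub(p3, p0), n) / dot(n, n)                 # signed offset of the fourth corner
    p3_flat = sub(p3, tuple(d * c for c in n))          # project it onto the plane
    return p0, p1, p2, p3_flat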

Figure 3.26 Partial Faade Population


3.2.1.b Economy

This dynamic/responsive curtain wall system was designed in a way that makes sense from the economic point of view. Coming back to the major requirements for a successful building system (mentioned at the end of chapter 3.2.0), in order for it to be successful it is not enough solely to perform at a very high level. The actual design/manufacturing/maintenance costs have to be low enough to result in a reasonable payback period. Considering that the average building life-span would not exceed 40 years, and that on average dynamic systems consume more technological resources (and are thus generally more expensive than static ones), developing a successful product of this kind that would stick on the market and attract clients might become a very complicated task. One of the things that can make a façade energy system (like the one described here) expensive is a large number of parts that require custom manufacturing, something that is almost impossible to avoid when dealing with double curvature. The way this problem was addressed in the design of the aperture façade cladding was to make all the components with high manufacturing cost modular. The most expensive part of the assembly described in the previous chapter is concentrated within the aperture sub-assembly (everything within the circular frame). Because of that, the parametric model was set up so that all these components will always be exactly the same, with absolutely no customization needed. The only custom part of the assembly is the sandwich quad panel, which turns out to be relatively inexpensive. Another strategy for cost reduction was to avoid any curved surfaces, which are another factor that dramatically increases manufacturing cost. The described assembly has only planar and rolled components.


Another cost-reduction strategy, related to the design process, is the parametric functionality of the digital model (mentioned earlier in the Design Process section). All the parts of the model, as well as their quantities, are completely flexible and can be adjusted at any time just by changing values. This way there is no constant redrawing needed, because any change of configuration at any level, and any small or big adjustment, is just a matter of adjusting numeric values. This reduces the amount of labor invested in the project as well as the duration of the development phase, all of which of course reduces the product's cost. This type of workflow allows the designer/architect to deliver the concept and describe the behavior of the system, and then transfer this extremely flexible virtual prototype to engineers, who can quickly make their adjustments and performance tune-ups and send it out for testing and manufacturing with no delays. In addition to the flexibility that the parametric functionality adds to the design process, the smart component is extended by a built-in project management tool. The project management tool for the aperture curtain wall system is part of the working model file. It recognizes all of the different parts that the assembly is made of, and it collects information about the number of parts of each type in the assembly as well as in the populated curtain wall system. It sums up all the information, such as materials, dimensions, cost, and total cost, and constantly updates a spreadsheet, overriding this information as soon as any change is made. This way there is always comprehensive information about the project. The PM tool eliminates the need to collect and calculate all this information manually, as well as the need to add staff for these particular bookkeeping tasks, which in turn reduces the cost of the final product. Figure 3.27 demonstrates a partial view of the spreadsheet that contains data about an already populated façade.

Figure 3.27 Embedded Project Management Tool/Functionality

As one can see from Figure 3.27, the quantity take-off is not the only type of information that this PM tool provides. Due to the dynamic nature/behavior of this curtain wall assembly and the flexibility in the adjustment of the parts, there is a need to check whether the aperture component in any configuration can properly open and close without the parts clashing. Clash detection is another type of information that is constantly being overridden in the spreadsheet. Basically, there is a possibility that, due to an increase or decrease in the number of blades, the maximum allowed angle of their rotation changes. The clash detection part of the software recognizes when the parts clash and writes a report that the current configuration produces a clash. In addition, similar functionality is provided for the population and the adjustment of the U and V subdivision of the surface. In order to keep the parts of the apertures all the same, the parameter that controls the main radius of the aperture is independent and user-defined. In addition, the quads produced by the surface panelization are not all the same: some of them can be larger or smaller than others, plus they can have different side-length ratios. This means that while some quads may be of a satisfactory size to host apertures of a set radius, some of them may not be large enough. The project management tool has an evaluation algorithm with multiple evaluation parameters that determine whether each quad is a satisfactory host by its size. It then writes a report for each quad so the designer can see which ones fail to host the aperture and by what amount. This also tells the designer that the surface subdivision has to be decreased until all the quads satisfy the hosting parameters. This embedded mechanism technically allows any façade to be populated properly, avoiding any clashes, in a matter of minutes, without having to deal with time-wasting and much less precise manual clash diagnostics.
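A simplified version of such a hosting check is sketched below: a single test of whether the aperture circle, centered at the quad centroid, clears every edge. The real tool evaluates multiple parameters; the clearance value, the centroid-based placement, and the names are assumptions for illustration.

import math

def can_host_aperture(quad, radius, clearance=0.05):
    """Rough host check for one planarized quad (2D corner points, in order):
    the aperture circle, centered at the quad centroid, must stay at least
    `clearance` away from every edge. Returns (passes, shortfall)."""
    cx = sum(p[0] for p in quad) / 4.0
    cy = sum(p[1] for p in quad) / 4.0
    for i in range(4):
        (x1, y1), (x2, y2) = quad[i], quad[(i + 1) % 4]
        # perpendicular distance from the centroid to the edge line
        edge_len = math.hypot(x2 - x1, y2 - y1)
        dist = abs((x2 - x1) * (y1 - cy) - (x1 - cx) * (y2 - y1)) / edge_len
        if dist < radius + clearance:
            return False, radius + clearance - dist     # fails, and by how much
    return True, 0.0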

3.2.1.c Software

This part of the aperture curtain wall description explains the requirements and the logic behind the software that was designed to run and control its behavior. The software is what makes this system (or any similar one) truly dynamic and responsive. The requirements for this type of software include support for a system of sensors, the ability to receive and process sensor data, and the ability to host algorithms that respond properly to particular environmental changes. Even though sensors were not discussed earlier, they are among the most important elements of this assembly. In order to operate independently (without human interference), this particular system, like any other responsive building system, has to have some way of interacting with its surroundings: something that immediately reports changes in the environment and delivers exactly the type of data (acoustic, solar, temperature, etc.) the system needs in order to decide how to adjust itself. Sensors are discussed in this section on the software because they play the role of the interface between the environment and the control algorithms.
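A minimal sketch of what such a sensor interface might look like is given below. The reading fields and the polling loop are assumptions made for illustration (the one-minute interval matches the update rate described in the next paragraphs), not an actual device protocol.

```python
import time
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One snapshot of the environmental data the controller consumes."""
    sun_azimuth_deg: float      # horizontal sun direction
    sun_altitude_deg: float     # sun height above the horizon
    irradiance_w_m2: float      # measured solar intensity

def poll_sensors():
    """Placeholder for the real sensor query; returns a hypothetical reading."""
    return SensorReading(sun_azimuth_deg=135.0, sun_altitude_deg=42.0,
                         irradiance_w_m2=380.0)

def control_loop(update_apertures, interval_s=60):
    """Read the sensors once a minute and hand the data to the control algorithm."""
    while True:
        update_apertures(poll_sensors())
        time.sleep(interval_s)
```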

Figure 3.28 Aperture in Action. Sun Tracking Diagram


The aperture assembly is intended to respond to the solar angle and the light intensity. Solar heat gain depends on the angle of incidence between the sun and the surface of incidence; in this case that surface is considered to be the window plane of each aperture (after all, that is how the light gets into the building). The peak heat gain occurs when the angle of incidence approaches 0 degrees (the sun is at 90 degrees relative to the plane of incidence). Even though, due to the façade curvature, the aperture/window openings sit at different angles, there is no need for each panel to have its own sun tracking device; one per building is quite enough. Because the panelization was digitally computed, each aperture already has its own unique plane of incidence (that is how the panels were populated in the first place), and there is only one reference plane per building that corresponds to the sun tracking device. The device measures the angle of incidence for this reference plane at a one-minute interval, and the software automatically computes the unique angle for each of the planes in the system. It also determines whether the sun is on the opposite side of each plane; in that case those apertures stay open, since they are not exposed to any direct solar radiation. For the planes on the sunny side, the software computes the angle of incidence between 90 and 0 degrees for each of them and passes it through a formula that rescales this value to the numeric domain of possible servo rotation in degrees (for instance, from 0 being completely closed to 40 being completely open). The range of rotation depends on the specifications, the configuration of the apertures, and the number of blades, but it generally falls within 0 to 35-50 degrees.

Another type of sensor data that the software needs in order to compute the angle of rotation is the solar intensity factor. The data from this sensor is treated as an overcast multiplier. The maximum solar radiation that the Earth's surface receives under a clear sky is around 450 W/m2. On the same day with an overcast sky, only diffuse sky radiation comes in; when this happens, the measured sun intensity is roughly 1/6th to 1/1000th of the maximum. In that situation there is no direct radiation to block, so the apertures should stay open. The software maps this data to a 0-1 range, where 1 is the highest solar radiation (clear sky) and 0 corresponds to 1/6th of the maximum and below (overcast sky). This value is then used as a multiplier plugged into the main formula, so that even if the sun is at its peak relative to an aperture's plane, the servos will receive a command to open completely under overcast sky conditions. Figure 3.28 demonstrates how the formula works under a clear sky: the black arrows are the normal vector of the plane and the vector that tracks the sun, between which the angle of incidence is measured (28 and 84 degrees incidence).
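The behavior just described can be captured in a short sketch. The exact rescaling expression below is an assumption consistent with the description (incidence angle mapped onto the servo range and attenuated by the overcast multiplier), not the literal formula from the thesis software, and the 40-degree maximum rotation is the illustrative value mentioned above.

```python
import math

def incidence_angle_deg(panel_normal, sun_vector):
    """Angle between a panel's normal and the direction toward the sun, in degrees."""
    dot = sum(a * b for a, b in zip(panel_normal, sun_vector))
    norm = math.sqrt(sum(a * a for a in panel_normal)) * math.sqrt(sum(b * b for b in sun_vector))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def servo_rotation_deg(theta_deg, overcast, max_rotation=40.0):
    """Map incidence angle (0-90 deg) and overcast multiplier (0-1) to a servo command.

    overcast = 1 means clear sky, 0 means fully overcast. At 0 deg incidence under a
    clear sky the aperture closes completely; under an overcast sky it opens completely.
    """
    if theta_deg > 90.0:          # sun behind the panel: no direct radiation, stay open
        return max_rotation
    closedness = (1.0 - theta_deg / 90.0) * overcast
    return max_rotation * (1.0 - closedness)

# The two incidence angles shown in Figure 3.28, under clear-sky conditions.
for theta in (28.0, 84.0):
    print(theta, round(servo_rotation_deg(theta, overcast=1.0), 1))
```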


Part 4

Conclusion


Data is a powerful resource. The outcome of this thesis project supported the idea that the utilization of relevant analysis data in architectural design allows it to become a more robust and efficient process. The ability to carry meaningful, project-related information and to utilize it along the way helps deliver well-supported and intelligent design solutions. However, this can only be achieved through a thorough analysis of the components that affect the successful outcome of an architectural design project, as well as by turning to advanced computational techniques and methods.

During the course of this thesis the author developed multiple design tools and methods for different project stages. Although these methods had different design goals, each of them shared a similar underlying objective: to utilize every bit of meaningful analysis data for the purpose of driving design solutions. On the data collection side the emphasis was placed on digital analysis tools (fluid dynamics and environmental analysis software). In order to stream the data directly into the design process, the subsequent methods relied heavily on modeling software that supports parametric relationships, as well as on scripting and programming, without which it would be almost impossible to quickly process large sets of information. These tools and methods made it possible to rationalize the design output by tying together in one stream such diverse elements as the designer's ideas, the client's feedback and inputs, precise environmental data, site information, and fabrication opportunities.


