Mission Statement

Welcome
The Tower is an interdisciplinary research journal for undergraduate students at the Georgia Institute of Technology.

The goals of our publication are to:
■ showcase undergraduate achievements in research;
■ inspire academic inquiry;
■ and promote Georgia Tech’s commitment to undergraduate research endeavors.

The Editorial Board

Chuyong Yi
Editor-in-Chief 2008-2010
editor@gttower.org

Michael Chen
Editor-in-Chief 2010-2011
editor@gttower.org

Emily Weigel
Associate Editor for Submission & Review
review@gttower.org

Angela Valenti
Associate Editor for Production
production@gttower.org

Erin Keller
Business Manager
business@gttower.org

Dr. Karen Harwell
Faculty Advisor
advisor@gttower.org

Staff

Undergraduate Reviewers
Michael Chen, Helen Xu, Andrew Ellet, Shereka Banton, Zane Blanton, Alexander Caulk, Azeem Bande-Ali, Katy Hammersmith, Ian Guthridge

Graduate Reviewers
Yogish Gopala, Michelle Schlea, Dongwook Lim, Rachel Horak, Lisa Vaughn, Rick McKeon, Shriradha Sengupta, Hilary Smith, Nikhilesh Natraj, Partha Chakraborty, David Miller

Production
J.B. Sloan, Grace Bayona, Matthew Postema

Web Master
Andrew Ash

Business
Jen Duke

Copyright Information
© 2010 The Tower at Georgia Tech. Office of Student Media. 353 Ferst Drive. Atlanta, GA 30332-0290
Acknowledgements

Special Thanks
The Tower would not have been possible without the assistance of the following people:

Dr. Karen Harwell  Faculty Advisor, Undergraduate Research Opportunities Program (UROP)
Dr. Tyler Walter  Library
Dr. Lisa Yasek  Literature, Communication & Culture
Dr. Richard Meyer  Library
Dr. Kenneth Knoespel  Literature, Communication & Culture
Dr. Steven Girardot  Success Programs
Dr. Rebecca Burnett  Literature, Communication & Culture
Dr. Thomas Orlando  Chemistry & Biochemistry
Dr. Anderson Smith  Office of the Provost
Dr. Dana Hartley  Undergraduate Studies
Mr. John Toon  Enterprise Innovation Institute
Dr. Milena Mihail  Computing Science & Systems
Dr. Pete Ludovice  Chemical & Biomolecular Engineering
Dr. Han Zhang  College of Management
Mr. Michael Nees  Psychology
Mr. Jon Bodnar  Library
Ms. Marlit Hayslett  Georgia Tech Research Institute (GTRI)
Mr. Mac Pitts  Student Publications

Review Advisory Board
Dr. Ron Broglio  Literature, Communication & Culture
Dr. Amy D’Unger  History, Technology & Society
Dr. Pete Ludovice  Chemical & Biomolecular Engineering
Mr. Michael Laughter  Electrical & Computer Engineering
Dr. Lakshmi Sankar  Aerospace Engineering
Dr. Jeff Davis  Electrical & Computer Engineering
Mr. Michael Nees  Psychology
Dr. Han Zhang  College of Management
Dr. Franca Trubiano  College of Architecture
Dr. Milena Mihail  Computing Science & Systems
Dr. Rosa Arriaga  Interactive Computing
Mr. Jon Bodnar  Library
Ms. Marlit Hayslett  Georgia Tech Research Institute (GTRI)
Dr. Monica Halka  Honors Program & Physics

Faculty Reviewers
Dr. Suzy Beckham  Chemistry & Biochemistry
Dr. Tibor Besedes  Economics
Dr. Wayne Book  Mechanical Engineering
Dr. Monica Halka  Honors Program & Physics
Dr. Melissa Kemp  Biomedical Engineering, GT/Emory
Dr. Narayanan Komerath  Aerospace Engineering
Mr. Michael Nees  Psychology
Dr. Manu Platt  Biomedical Engineering, GT/Emory
Dr. Lakshmi Sankar  Aerospace Engineering
Dr. Franca Trubiano  College of Architecture

The Tower would also like to thank the Georgia Tech Board of Student Publications, the Undergraduate Research Opportunities Program, the Georgia Tech Library and Information Center, and the Student Government Association for their support and contributions.
Letters from the editors
Seneca once said “Every new beginning comes from sible. I would especially like to thank our faculty advi-
some other beginning’s end,” and as I pen this last let- sor, Dr. Karen Harwell, for her indispensable wisdom
ter as the Editor-In-Chief of The Tower, the quote seems that helped me grow into the editor that I needed to
truer by the minute. Reflecting back on working with be, and our Director of Student Media, Mr. Mac Pitts,
The Tower for the last three years, I am flooded with and his assistants, Ms. Nyla Hughes and Ms. Marlene
fond memories. Beard-Smith, for their support and guidance. Lastly, I
would like to thank the Georgia Tech Board of Student
I remember the weekly Friday meetings where Henry
Publications for its immense support, the Georgia Tech
(former Associate Editor for Production who had a
Library for providing us with an online platform that is
knack for design), Joseph (former Webmaster who
integral in the workings of the journal, SGA for provid-
seemed to be able to translate any idea into codes), and
ing a generous funding for our print journals, and the
I worked until midnight in our corner office laying out
Omega Phi Alpha service sorority for providing us the
articles with an occasional Taco Bell break. Then there
manpower when we needed it the most.
was the occasional lunch break where Martha, Henry,
Erin and I, despite the name of the meeting, skipped Thank you to all who were involved in The Tower, and I
our lunch to have a board meeting. I remember, of wish much luck to Michael and his new editorial board!
course, the second editorial board and our couch meet- I believe in you guys!
ings where we shared our personal moments in which
Best,
we grew closer than we ever would have been able to by
simply working.
This, I believe, is why The Tower saw such success in its
first three years. Everyone that was involved in the jour-
Chuyong Yi
nal had a personal connection either to the journal or to
those working on the journal. Such an intimate working Editor-In-Chief, 2008-2010
environment allowed each member’s passion and moti-
vation for the journal to propagate freely when the team
needed it most. We performed not for the position that
we each filled but for each other.
The Tower and I had a rather symbiotic relationship—as
it grew, I grew. I sincerely hope that Michael, the new
Editor-In-Chief, and his new editorial board will enjoy
learning from the journal as much as I did. I have full
faith that, upon my departure, Michael and his team
will bring forth a new beginning in which they will take
the journal to a higher level of success.
I would like to thank each and every individual re-
viewer (faculty and student), production team member,
and business team member who made this journal pos-
Letters from the editors
The first time I met Chuyong Yi, our outgoing Editor-In- Special thanks to our staff reviewers who endured read-
Chief, she was interviewing me at the Barnes & Noble’s ing submission after submission without rest, our pro-
Starbucks. Chu was full of words and highly energetic, duction team who spent countless hours designing the
so I naturally thought it was the caffeine talking. After journal, and our business team members. Thanks to the
our next couple of encounters, I realized that this was editorial board for their repeated commitment to The
actually her personality. During my time at The Tower, Tower, the Library for providing us an online journal
I saw that The Tower was a reflection of our Editor-In- system, and the Student Government Association for
Chief ’s personality—bursting full of enthusiasm. making The Tower’s print journal possible. Finally, I es-
pecially thank Mr. Mac Pitts, the Director of Student
Chu presided over the publication of our first two print Media, for advising us in tough times and his assistants,
journals. I would like to congratulate her for being our Marlene Beard-Smith and Nyla Hughes.
leader during the greatest expansion our journal, The
Tower, has ever seen. Under Chu, The Tower stayed true Keep on the lookout for new material, whether that
to its mission of showcasing undergraduate research be online or in print, from The Tower. I encourage you
and providing a learning tool to aspiring scientists. Our to visit our website, gttower.org, for more information
journal, initially started by a small group of undergradu- about what we do and how you can get involved.
ates with a tremendous passion for academic research,
now contains nearly 40 staff members. Without her Cheers,
hard work and the hard work of each of our staff, our
goal of providing an outlet to undergraduate researchers
would not be possible.

Sadly, another member of our family is leaving. Known Michael Chen


for her tremendous enthusiasm for undergraduate re- Editor-In-Chief 2010-2011
search, our faculty advisor, Dr. Karen Harwell, is leav-
ing Georgia Tech to pursue other career aspirations. A
mentor to all of us here at The Tower, she will be sorely
missed.

I hope to carry on the work of Chu, Dr. Harwell, and


our founding members by increasing the publication
frequency of our print journal and broadening the mate-
rial that we publish. I plan to increase our collaboration
with student publications dedicated to college-specific
research news. I will work to increase our internet pres-
ence so that our journal is accessible anywhere and at
anytime. With the hard work of our top-notch staff and
the advice of the Student Publications Board, I hope to
further expand The Tower to greater heights.
Getting involved
Call for papers
The Tower is seeking submissions for our Fall 2009 issue. Papers may be submitted in the following categories:

Article: the culmination point of an undergraduate research project; the author addresses a clearly defined research problem
Dispatch: reports recent progress on a research challenge; narrower in scope
Perspective: provides personal viewpoints and invites further discussions through literature synthesis and/or logical analysis

If you have questions, please e-mail review@gttower.org. For more information, including detailed submission guidelines and samples, visit gttower.org.

Cover design contest

This year’s cover was designed by Esther Chung.
The Tower is looking for its next cover design. Submission deadline is February 5, 2011. Top de-
sign will win $50 and a t-shirt. Get creative! The template is available at gttower.org. Please avoid
copyrighted images. Final designs should be submitted to production@gttower.org.

Staff
Want to be involved with The Tower behind the scenes? Become a member of the staff! The Tower
is always accepting applications for new staff members. Positions in business, production, review,
and web development are available. Visit gttower.org or email editor@gttower.org for more infor-
mation on staff position availabilities.
Table of Contents

Perspectives

9  Value Sensitive Programming Language Design
   Nicholas Marquez

17  Synthetic Biology: Approaches and Applications of Engineered Biology
   Robert Louis Fee

Articles

25  Network Forensics Using Piecewise Polynomials
   Sean Marcus Sanders

37  Characterization of the Biomechanics of the GPIbα-vWF Tether Bond Using von Willebrand Disease-Causing Mutations R687E and WT vWF A1A2A3
   Venkata Sitarama Damaraju

47  Moral Hazard and the Soft Budget Constraint: A Game-Theoretic Look at the Primal Cause of the Sub-Prime Mortgage Crisis
   Akshay Kotak

57  Compact Car Regenerative Drive Systems: Electrical or Hydraulic
   Quinn Lai

71  Switchable Solvents: A Combination of Reaction and Separations
   Georgina W. Schaefer
Value Sensitive Programming Language Design

Nicholas Marquez
School of Computer Science
Georgia Institute of Technology

A programming language is a user interface. In designing a system’s user interface, it is not controversial to assert that a thoughtful consideration of the system’s users is paramount; indeed, consideration of users has been a primary focus of Human-Computer Interaction (HCI) research. General-purpose programming language design has not had much need for disciplined HCI methodology because programming languages have been designed by programming language users themselves. But what happens when programmers design languages for non-programmers? In this paper we claim that the application of a particular design methodology from HCI – Value Sensitive Design – will be valuable in designing languages for non-programmers.

Advisor:
Charles L. Isbell
School of Computer Science
Georgia Institute of Technology
Introduction
A programming language is a user interface. In designing a system’s user interface, it is not controversial to assert that a thoughtful consideration of the system’s users is paramount. Though there is a large body of research from the Human-Computer Interaction (HCI) research community studying just how best to consider a system’s users in the design of its interface, there is little history of applying any of these methodologies from HCI to the design of general-purpose programming languages. Ken Arnold has argued that, since programmers are human, programming language design should employ techniques from HCI (Arnold 2005). While there has been some work in applying HCI to the design of languages for non-programmers, for example, for children’s programming environments (Pane et al. 2002), general purpose programming languages have not suffered much from a lack of HCI methodology in their design because programming languages have been designed by programmers, for programmers. In other words, programming language design has not had much need for disciplined HCI methodology because programming languages have been designed by programming language users themselves. But what happens when programmers design languages for non-programmers? How does the language designer know which design decisions to take? We claim that these questions can and should be answered with the help of a disciplined application of design methodologies developed in the HCI community.

We are designing a language for non-programmers who use computational models in the conduct of their non-programming work, in particular social scientists and game developers who write intelligent agent-based programs. Agent-based programming has, as one of the primary abstractions, “agents” who interact with each other and their environment asynchronously, maintain their own state, and are generally analogous to individual beings within an environment. In designing this language, we believe that working closely with our intended users is crucial to the development of tools that will meet their needs and be adopted. To guide our design interactions with our users we are applying the Value Sensitive Design (VSD) (Friedman et al. 2006; Le Dantec et al. 2009) methodology from HCI. In this paper we give a short description of VSD and discuss how it may be applied to the design of our programming language. This work is currently at an early stage, and our understanding and application of VSD is evolving. Nevertheless we believe that the application of HCI methodologies in general, and VSD in particular, will be extremely valuable in the development of languages and software tools that are intended for non-programmers, that is, professionals for whom programming is an important activity but not the primary focus of their work.

In the next section we provide a brief description of Value Sensitive Design (VSD), then we propose a way of applying VSD to programming language design and conclude with a discussion of how we are applying it in our own language design project.

Value Sensitive Design
In this section we briefly describe VSD as detailed in Friedman et al. (2006). We begin with their definition of VSD:

    Value Sensitive Design is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner.
In this context, a value is something that a person considers important in life. Values cover a broad spectrum from the lofty to the mundane, encompassing things like accountability, awareness, privacy, and aesthetics – anything a user considers important. While VSD uses a broader meaning of value than that used in economics, it is important to rank values so that conflicts can be resolved when competing values suggest different design choices.

VSD employs an iterative, interleaved tripartite methodology that includes conceptual, empirical, and technical investigations. In the following sections we describe each of these types of investigations.

Conceptual Investigations
We think of conceptual investigations as analogous to domain modeling. In conceptual investigations we specify the components of particular values so that they can be analyzed precisely. We specify what a value means in terms useful to a programming language designer. Conceptual investigations may be done before significant interaction with the target audience takes place. As is characteristic of VSD, however, conceptualizations are revisited and augmented as the design process proceeds in an iterative and integrative fashion.

An important additional part of conceptual investigation is stakeholder identification. Direct stakeholders are straightforward – they are the people who will be writing code in your language using the tools you provide. However, it is important to consider indirect stakeholders as well. For example, working programs may need to be delivered by your direct stakeholders to third parties – these third parties constitute indirect stakeholders. The characteristics of indirect stakeholders will implicate values that must be supported in the design of your language. If the indirect stakeholders are technically unsophisticated, for example, then the language must support the delivery of code that is easy to install and run.

Empirical Investigations
Empirical investigations include direct observations of the human users in the context in which the technology to be developed will be situated. In keeping with the iterative and integrative nature of VSD, empirical investigations will refine and add to the conceptualizations specified during conceptual investigations.

Because empirical investigations involve the observation and analysis of human activity, a broad range of techniques from the social sciences may be applied. Of all the aspects of VSD, empirical investigation is perhaps the most foreign to the typical technically focused programming language designer. However, as computational tools and methods reach deeper into realms not previously considered, we believe empirical investigations are crucial to making these new applications successful.

Technical Investigations
Technical investigations interleave with conceptual and empirical investigations in two important ways. First, technical investigations discover the ways in which users’ existing technology supports or hinders the values of the users. While these investigations are similar to empirical investigations, they are focused on technological artifacts rather than humans. The second important mode of technical investigations is proactive in nature: determining how systems may be designed to support the values identified in conceptual investigations.

Applying VSD to Programming Language Design
In this section we discuss the ways in which we are applying VSD to the design of a programming language. First we discuss the language itself and the target audience of our language.
AFABL: A Friendly Adaptive Behavior Language
AFABL (which is the evolution of Adaptive Behavior Language) integrates reinforcement learning into the programming language itself to enable a paradigm that we call partial programming (Simpkins et al. 2008). In partial programming, part of the behavior of an agent is left unspecified, to be adapted at run-time. Reinforcement learning is an area of machine learning focused on the technique of having an agent perform actions in its environment which optimize (usually maximize) some concept of a reward. Using the reinforcement learning model, the programmer defines elements of behaviors – states, actions, and rewards – and leaves the language’s runtime system to handle the details of how particular combinations of these elements determine the agent’s behavior in a given state. AFABL allows an agent programmer to think at a higher level of abstraction, ignoring details that are not relevant to defining an agent’s behavior. When writing an agent in AFABL, the primary task of the programmer is to define the actions that an agent can take, define whatever conditions are known to invoke certain behaviors, and define other behaviors as “adaptive,” that is, to be learned by the AFABL runtime system.
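To make this division of labor concrete, the following is a minimal, self-contained Scala sketch of the partial programming idea: the modeler declares only the states, actions, world dynamics, and reward signal for a toy corridor world, while a generic runtime learns the behavior with tabular Q-learning. The corridor domain, the names, and the learning parameters are illustrative assumptions of this sketch; the paper does not give AFABL's actual syntax or learning algorithm.

object PartialProgrammingSketch {
  type State  = Int                                  // position on a 1-D corridor, 0..9
  type Action = Int                                  // -1 = step left, +1 = step right
  val actions: Seq[Action] = Seq(-1, 1)
  val goal: State = 9

  // What the modeler declares: world dynamics and what counts as a good outcome.
  def step(s: State, a: Action): State = math.max(0, math.min(goal, s + a))
  def reward(s: State): Double = if (s == goal) 1.0 else 0.0

  // What a runtime could supply: a generic tabular Q-learner the modeler never writes.
  def learnPolicy(episodes: Int = 500): Map[(State, Action), Double] = {
    var q = Map.empty[(State, Action), Double]
    def value(s: State, a: Action): Double = q.getOrElse((s, a), 0.0)
    val rng = new scala.util.Random(42)
    val (alpha, gamma, epsilon) = (0.5, 0.9, 0.1)
    for (_ <- 1 to episodes) {
      var s: State = 0
      while (s != goal) {
        val a =
          if (rng.nextDouble() < epsilon) actions(rng.nextInt(actions.size))
          else actions.maxBy(value(s, _))
        val s2 = step(s, a)
        val target = reward(s2) + gamma * actions.map(value(s2, _)).max
        q = q.updated((s, a), value(s, a) + alpha * (target - value(s, a)))
        s = s2
      }
    }
    q
  }

  def main(args: Array[String]): Unit = {
    val q = learnPolicy()
    def value(s: State, a: Action): Double = q.getOrElse((s, a), 0.0)
    val policy = (0 until goal).map(s => actions.maxBy(value(s, _)))
    println(s"Learned action per state: ${policy.mkString(", ")}")   // expect all 1 (move right)
  }
}

In a language like AFABL the learning half would live inside the runtime system, so the modeler's source would consist only of declarations in the spirit of step, reward, and the list of available actions.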
We are designing AFABL for social scientists and other agent modelers who are not programmers per se but who employ programming as a basic part of their methodological toolkit. We also hope to encourage greater use of agent modeling and simulation among practitioners who currently do not use agent modeling, and among agent modelers who would write more complex models if given the tools to do so more easily. Since these kinds of users have very different backgrounds from programmers it is important to understand their needs and values in designing tools intended for their use. We believe that VSD will be one methodological tool among perhaps many that will help us understand our target audience and truly incorporate their input into the design process. In the next section we discuss how this process is taking place in the design of our language.

Using VSD in the Design of AFABL
We are working with two different groups who are currently using agent modeling in their work. The first group, the OrgSim group (Bodner and Rouse 2009), is a team of industrial engineers who are studying organizational decision-making using agent-based simulations as well as other more traditional forms of simulations. The OrgSim group wants to model the human elements of organization in order to create richer, more realistic models that can account for human biases, personality, and other factors that can be simulated only coarsely, if at all, using traditional simulation techniques. The second group is a team of computer game researchers creating autonomous intelligent agents that are characters in interactive narratives (Riedl et al. 2008; Riedl and Stern 2006).

Both of these groups are using the most advanced agent modeling language available to them: ABL (A Behavior Language) (Mateas and Stern 2004). ABL was created in the course of a Computer Science PhD by a games researcher to meet his needs in creating a first-of-its-kind game, an interactive drama. ABL was not designed with the help of or goal of assisting non-programming expert agent modelers. Naturally, both groups have met with difficulty in using ABL. By using VSD in working with these groups we hope to meet their needs with AFABL.

Conceptual Investigations of Agent Modelers
As described earlier, conceptual investigations yield working definitions of values that can be used in the design of technological artifacts – in our case the AFABL programming language. In our conceptual investigations thus far we have identified several values, whose conceptualizations as we currently understand them are listed below.
• Simplicity. Here simplicity has two essential features. First, AFABL must be consistent in its design, both internally and in the extent to which it exploits the users’ current knowledge of programming. Internal consistency means that when users encounter a new language construct for the first time, they should be able to apply their knowledge of analogous constructs they already know. External consistency means that AFABL should use programming constructs that users already know from other languages and require users to learn as few new language constructs as possible.

• Power. A language is sufficiently powerful if it allows its programmers to reasonably and easily write all the programs they want to write in the language. If a language makes it hard to write certain types of programs, then those programs will usually not be written, thus limiting the scope of use of the language. Naturally, power trades off with simplicity, but simplicity at the expense of essential power is unacceptable to our target audience. In the design implications section below we discuss strategies for dealing with the power versus simplicity issue.

• Participation. Our user communities are eager to contribute to the design of AFABL and to its documentation and development of best practices. We welcome this participation and believe that it will positively impact adoption, both with the users with whom we are already working and new users that will be influenced by our early adopters. VSD directly supports and encourages this participation in the design process.

• Growth. The language we develop and the theoretical models of intelligent and believable agents that we employ today may not be the last words. It is important that AFABL be able to accommodate new models and applications.

• Modeling Support. A modeling tool imposes a structure on the way an agent modeler thinks about agents. AFABL should do so in a helpful way, if possible, but certainly not hinder particular ways of thinking about agents.

Empirical Investigations of Agent Modelers
Solving a problem requires an understanding of the problem. The problem in our case is the experience of agent modelers in using the computational tools at their disposal. To understand the problems agent modelers face and their desiderata for computational tools, we are joining their teams and using their existing tools alongside them. In doing so we hope to gain an appreciation for the goals of their work, the expertise they bring to the task, and the difficulties they have in using existing tools to accomplish their goals. We hope to gain a level of empathy that will help us develop a language and tools that will meet their needs very well.

Technical Investigations of Agent Modelers
What do they already use? How do their existing tools support or hinder their values? What technology choices do we have at our disposal to support their values? These are the kinds of questions we address in technical investigations. In our case, there is a rich tapestry of software tools already in use by our users. These tools include virtual worlds — simulation platforms and game engines — and editing tools for the programs they currently write. Some of these tools are essential to their work and some may be replaced with tools we develop. One overriding value that stems from our users’ existing tool base is interoperability. Any language or tool we develop must support interoperability with their essential tools.

Implications of Values on Programming Language Design
We are already familiar with many values supported by the general purpose programming languages we use. C supports values like efficiency and low-level access. Lisp supports values like extensibility and expressiveness. Python supports simplicity. In this section we discuss how some of the values we identified above may impact the design of our language.
Interoperability  It is essential that our language and tools support interoperability with the virtual worlds currently in use. In our technical investigations we have found that the simulation platforms and game engines in use support Java-based interfaces, and many of them run on the Java Virtual Machine (JVM). Since these projects also use ABL, they have existing code bases that use the ABL run-time system and bridge libraries that enable communication between ABL agents and virtual worlds. These technical investigations lead us to the following design decisions for AFABL:

• AFABL will run on the JVM. Currently, we are planning to implement AFABL as an embedded domain specific language (EDSL) written in the Scala programming language (Odersky et al. 2008). This will allow us to interoperate well with Java programs and ABL while providing advanced language features in the design and implementation of our language.

• AFABL will use the ABL run-time system. While we have decided to depart from the syntax and language implementation of ABL, the agent model and run-time system of ABL represents a tremendous amount of valuable work that we wish to build on, not reinvent. Additionally, using the ABL run-time system will allow us to make use of the existing bridges between ABL agents and virtual worlds.
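As a small illustration of what an embedded DSL gives us here, the sketch below shows how Scala's by-name parameters, higher-order functions, and curried methods let agent declarations read like a little language while remaining ordinary Scala code on the JVM. The behavior/tick API is an assumption invented for this illustration; it is not AFABL's actual syntax.

object EdslSketch {
  final case class Behavior(name: String, condition: () => Boolean, body: () => Unit)
  private var behaviors = List.empty[Behavior]

  // The "keywords" of the embedded language are just ordinary Scala methods.
  def behavior(name: String)(when: => Boolean)(run: => Unit): Unit =
    behaviors ::= Behavior(name, () => when, () => run)

  // Run the first declared behavior whose condition currently holds.
  def tick(): Unit =
    behaviors.reverse.find(b => b.condition()).foreach(b => b.body())

  def main(args: Array[String]): Unit = {
    var hunger = 7
    behavior("eat")(when = hunger > 5) { println("agent eats"); hunger = 0 }
    behavior("wander")(when = hunger <= 5) { println("agent wanders") }
    tick()   // hunger is 7, so "eat" fires
    tick()   // hunger is now 0, so "wander" fires
  }
}

Because these "keywords" are nothing more than method calls, an embedded language of this kind inherits Java and JVM interoperability for free, which is exactly the property the design decisions above depend on.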
Simplicity and Power  Simplicity and power often oppose each other when taking design decisions, so we discuss these values here together. We hope to maximize both power and simplicity with the following language features:

• Provide a simple consistent set of primitives and syntax while providing expressive power through first class functions, closures, objects, and modules. Languages like Ruby and Python can be used by programming novices as well as expert programmers who use advanced expressive features such as iterators, comprehensions, closures, and metaprogramming. We intend to employ the same strategies in the design of AFABL.

• Feel free to make presumptions / Optimize for the common case — A great majority of the time, modelers will be using similar methods and approaches. There should be as little friction between the modeler’s thoughts and the compiler’s input as possible. Being able to make sound prejudgments about the programming language’s users and the patterns of programming they exhibit is key to opening a whole class of optimizations and simplifications that can help both the user and the compiler. In the context of AFABL, this means that we very much need to evaluate our design at every step with our target userbase and should employ, e.g., VSD in doing so.

• Do not assume anything / Keep uncommon and unforeseen cases possible — Only close off the language where it would create great disparity of future implementation or for necessary optimizations. In the latter case (should the common case be in use) the alternate, optimized, but less extensible implementation can be used. One should not outright assume anything about the user (because this would restrict future ways in which the language could be used), and should take care to properly document and account for any presumptions. We must be sure to focus AFABL not too much towards our VSD-driven presumptions, lest we unintentionally restrict the ease of use of the language for other types of modelers and users.

Participation  Our users have expressed strong interest in contributing to the design, documentation, and practices of AFABL. To accommodate our users’ desire for participation we anticipate the following features of AFABL:
• Iterative language development. By designing AFABL around a small set of primitives, we hope to get the language into the hands of users early in its development. That way users can experiment with the language and provide feedback throughout its development. Put another way, AFABL will be developed with agile software development practices.

• User-accessible documentation system. Many languages already provide programmers with the means to automatically generate documentation from source code. Many language communities also provide user-accessible documentation systems, such as Wikis and web forums, whereby users can share their knowledge and contribute directly to the documentation base of the language. We will employ similar mechanisms for AFABL.

Growth  New theories of agent modeling and new virtual worlds will be created in the future. To accommodate these changes, we will design AFABL for growth in the following two ways:

• Support new run-time systems. The ABL run-time system represents a particular way of modeling behavioral agents. It may be possible to support new agent theories by connecting AFABL with new run-time systems.

• Support the full range of operating system and JVM intercommunication. By providing a full set of intercommunication mechanisms, such as pipes, sockets, file system access, and JVM interoperability, AFABL should be able to accommodate new virtual world environments.

Conclusion
In this paper we have taken the position that design methodologies from the HCI research community can be of great benefit in the development of programming languages. Among the design processes we are employing, we have singled out Value Sensitive Design and described how it can be used in the design of programming languages and tools for a non-traditional population of programmers, in our case agent modelers like social scientists and game designers.

Acknowledgements
I wish to thank David Roberts for suggesting the use of Value Sensitive Design, and Doug Bodner and Mark Riedl for allowing us to participate in their projects and their help in designing AFABL.
References

Arnold, K. Programmers are people, too. Queue, 3(5):54–59, 2005.

Bodner, D.A. and Rouse, W.B. Handbook of Systems Engineering and Management, chapter Organizational Simulation. Wiley, 2009.

Friedman, B., Kahn, P.H., Jr., and Borning, A. Human-Computer Interaction and Management Information Systems: Foundations, chapter 16. M.E. Sharpe, Inc., NY, 2006.

Le Dantec, C.A., Poole, E.S., and Wyche, S.P. Values as lived experience: Evolving value sensitive design in support of value discovery. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2009), Boston, MA, USA, April 2009.

Mateas, M. and Stern, A. Life-like Characters: Tools, Affective Functions and Applications, chapter A Behavior Language: Joint Action and Behavioral Idioms. Springer, 2004.

Odersky, M., Spoon, L., and Venners, B. Programming in Scala. Artima, 1st edition, 2008.

Pane, J.F., Myers, B.A., and Miller, L.B. Using HCI techniques to design a more usable programming system. In Symposium on Empirical Studies of Programmers (ESP02), Proceedings of 2002 IEEE Symposia on Human Centric Computing Languages and Environments (HCC 2002), Arlington, VA, September 2002.

Riedl, M.O. and Stern, A. Believable agents and intelligent scenario direction for social and cultural leadership training. In Proceedings of the 15th Conference on Behavior Representation in Modeling and Simulation, Baltimore, Maryland, 2006.

Riedl, M.O., Stern, A., Dini, D., and Alderman, J. Dynamic experience management in virtual worlds for entertainment, education, and training. In International Transactions on Systems Science and Applications, Special Issue on Agent Based Systems for Human Learning, volume 4(2), 2008.

Simpkins, C., Bhat, S., and Isbell, C.L., Jr. Towards adaptive programming: Integrating reinforcement learning into a programming language. In OOPSLA ’08: ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, Onward! Track, Nashville, TN, USA, October 2008.
Synthetic Biology: Approaches and Applications of Engineered Biology

Robert Louis Fee
School of Chemistry and Biochemistry
Georgia Institute of Technology

Synthetic biology is expected to change how we understand and engineer biological systems. Lying at the intersection of molecular biology, physics, and engineering, the applications of this exploding field will both draw from and add to many existing disciplines. In this perspective, the recent advances in synthetic biology towards the design of complex, artificial biological systems are discussed.

Advisor:
Friedrich C. Simmel
School of Physics
Technical University Munich
The Rise of Synthetic Biology
Several remarkable hurdles in the life sciences have been cleared during the last half of the 20th century, from the discovery of the structure of DNA in 1953, to the deciphering of the genetic code, the development of recombinant DNA techniques, and the mapping of the human genome. Scientists have routinely tinkered with genes for the last 30 years, even inserting a cold-water fish gene into wheat to improve weather resistance; thus, synthetic biology is by no means a new science. Synthetic biology is a means to harness the biosynthetic machinery of organisms on the level of an entire genome to make organisms do things in ways nature has never done before.

Synthetic biology, despite its long history, is still in the early stages of development. The first international conference devoted to the field was held at M.I.T. in June 2004. The leaders sought to bring together “researchers who are working to design and build biological parts, devices, and integrated biological systems; develop technologies that enable such work; and place this scientific and engineering research within its current and future social context” (Synthetic Biology 101, 2004). The field is growing quickly, as evidenced by the rapidly increasing number of genetic discoveries, the exploding number of research teams exploring the field, and the funding from government and industrial sources.

Akin to the descriptive-to-synthetic transformation of chemistry in the 1900s, biological synthesis forces scientists to pursue a “man-on-the-moon” goal that demands they discard erroneous theories and compels scientists to solve problems not encountered in observation. Data contradicting a theory can sometimes be excluded for the sake of argument, but doing the same while building a lunar shuttle would be disastrous. Synthetic biology comes at an important time; by creating analogous “man-on-the-moon” engineering goals in the form of synthetic bioorganisms, it is similarly driving scientists towards a deeper level of understanding of biology.

Applications of Engineered Organisms
It is expected that advances in synthetic biology will create important advances in applications too diverse and numerous to imagine. Applications of bioengineered microorganisms include detecting toxins in air and water, breaking down pollutants and dangerous chemicals, producing pharmaceuticals, repairing defective genes, targeting tumors, and more. In 2008, genomics pioneer Dr. Craig Venter secured a $600 million grant from ExxonMobil to develop hydrocarbon-producing microorganisms as an alternative to crude oil (Borrell 2009).

Scientists are engineering microbes to perform complex multi-step syntheses of natural products. Jay Keasling, a professor at the University of California, Berkeley, recently demonstrated genetically engineered yeast cells (Saccharomyces cerevisiae) that manufacture the immediate precursor of artemisinin, an antimalarial drug widely used in developing countries (Ro et al., 2006). Before, this compound was chemically extracted from the sweet wormwood herb. Since the extraction is expensive and the wormwood herb is prone to drought, the availability of the drug is reduced in poorer countries. Once the engineered yeast cells were fine-tuned to produce high amounts of the artemisinin precursor, the compound was made quickly and cheaply. This same method could be applied to the mass-production of other drugs currently limited by natural sources, such as the anti-HIV drug prostratin and the anti-cancer drug taxol (Tucker & Zilinskas, 2006).
The most far-sighted effort in synthetic biology is the drive towards standardized biological parts and circuits. Just as other engineering disciplines rely on parts that are well-described and universally used — like transistors and resistors — biology needs a tool box of standardized genetic parts with characterized performance. The Registry of Standard Biological Parts comprises many short pieces of DNA that encode multiple functional genetic elements called “BioBricks” (Registry of Standard Biological Parts). In 2008, the Registry contained over 2000 basic parts comprised of sensors, input/output devices, regulatory operators, and composite parts of varying complexity (Greenwald, 2005). The M.I.T. group made the registry free and public (http://parts.mit.edu/) and has invited researchers to contribute to the growing library.

Some genetic parts code for a promoter gene that begins the transcription of DNA into mRNA, a repressor that codes a protein that blocks the transcription of another gene, a reporter gene that encodes a readout signal, a terminator sequence that halts RNA transcription, and a ribosome binding site that begins protein synthesis. The goal is to develop a discipline-wide standard and source for creating, testing, and combining BioBricks into increasingly complicated functions while reducing unintended interactions.

To date, BioBricks have been assembled into a few simple genetic circuits (McMillen & Collins, 2004). One creates a film of bacteria that is sensitive to light so it can capture images (Levskaya et al., 2005). Another operates as a type of battery, producing a weak electric current. BioBricks have been combined into logic gate devices that execute Boolean operations, such as AND, NOT, OR, NAND, and NOR. An AND operator creates an output signal when it gets a biochemical signal from both inputs; an OR operator generates an output if it gets a signal from either input; and a NOT operator changes a weak signal into a strong one, and vice versa. This would allow cells to be small programmable machines whose operations can be controlled through light or various chemical signals (Atkinson et al., 2003).

Despite the enormous progress seen in the last five years and some highly publicized and heavily funded feats, the systematic and widespread design of biological systems remains a formidable task.

Current Challenges

Standardization
Standards underlie engineering disciplines: measurements, gasoline formulation, machining parts, and so on. Certain biotechnology standards have taken hold in cases such as protein crystallography and enzyme nomenclature, but engineered biology lacks a universal standard for most classes of functions and system characterization. One research group’s genetic toggle switch may work in a certain strain of Escherichia coli in a certain type of broth, while another’s oscillatory function may work in a different strain when cells are grown in supplemented minimal media (Endy, 2005). It is unclear whether the two biological functions can be combined despite the different operating parameters. The Registry of Standard Biological Parts and new Biofab facilities have recently emerged to begin addressing this issue, and a growing consensus is emerging on the best way to reliably build and describe the function of new genetic components.

Abstraction
Drawing again from other engineering disciplines, and specifically from the semiconductor industry, synthetic biology must manage the enormous complexity of natural biological systems by abstraction hierarchies. After all, writing “code” with DNA letters is comparable to creating operating systems by inputting 1’s and 0’s. Levels could be defined as DNA (genetic material), Parts (basic functions, such as a terminating sequence for an action), Devices (combinations of parts), and Systems (combinations of devices). Scientists should be able to work independently at each hierarchy level, so that device-level workers would not need to know anything about phosphoramidite chemistry, or genetic oscillators, etc. (Canton, 2005).
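The same layering can be sketched in software terms. The Scala types below are an analogy invented for this illustration (not a biological or Registry data model): each level is composed only from the level beneath it, so someone assembling Devices never touches raw sequence.

object AbstractionHierarchySketch {
  final case class Dna(sequence: String)                          // genetic material
  sealed trait Part { def dna: Dna }                              // basic functions
  final case class Promoter(dna: Dna)   extends Part
  final case class Reporter(dna: Dna)   extends Part
  final case class Terminator(dna: Dna) extends Part
  final case class Device(parts: List[Part])                      // combinations of parts
  final case class BioSystem(devices: List[Device])               // combinations of devices

  def main(args: Array[String]): Unit = {
    // A device-level worker composes parts without reasoning about the sequences inside them.
    val reporterDevice = Device(List(
      Promoter(Dna("placeholder sequence")),
      Reporter(Dna("placeholder sequence")),
      Terminator(Dna("placeholder sequence"))))
    val biosensor = BioSystem(List(reporterDevice))
    println(s"System of ${biosensor.devices.size} device(s) built from " +
      s"${biosensor.devices.map(_.parts.size).sum} part(s)")
  }
}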

Figure 1. The Registry of Standard Biological Parts. This registry offers free access to basic biological functions that are used to create new biological systems. Pictured is a standard data sheet on a gene regulating transcription, with normal performance and compatibility measurements, plus an extra biological concern: system performance during evolution and cell reproduction. The registry is part of a conscious effort to standardize gene parts in the hopes of creating interchangeable components with well-characterized functions when implanted in cells. The project is open source; anybody can freely use and add information to the Registry.
Engineered Simplicity and Evolution
The rapid progress made by mechanical engineering in this century was made possible by creating easily understandable machines. Engineered simplicity is helpful not only for repairs but for future upgrades and redesigns. While a modern automobile may seem complex, the level of complexity pales in comparison to a living cell, which has far more interconnected pathways and interactions. Cells evolved in response to a multitude of evolutionary pressures, and mechanisms were developed to be efficient, not necessarily easy to understand (Alon, 2003). A related problem is that other engineered systems don’t evolve. Organisms such as E. coli reproduce and have genetic mutations within hours. While this offers possibilities to the biological engineer (for instance, human-directed evolution for fine-tuning organism behavior), it also increases the complexity of designing and predicting the function of these new genetic systems (Haseltine, 2007).

Risks Associated with Biological Engineering

Accidental Release
Researchers first raised concerns at the Asilomar Conference in California during the summer of 1975 and concluded that current genetic experiments carried minimal risk. The past 30 years of experience in genetically-manipulated crops demonstrated that engineered organisms are less fit than their wild counterparts, and they either die or eject their new genes without constant assistance from humans. However, researchers concluded that the abilities to replicate and evolve required special precautions. It was recommended that all researchers work with bacterial strains that are specially designed to be metabolically deficient so they cannot survive in the wild. Still, some have suggested that an incomplete understanding and emergent properties arising from unforeseen interactions between new genes could be problematic. Such dangers have given rise to fears of a dystopian takeover by super-rugged plants that overwhelm local ecosystems.

Bioterrorism
Research in synthetic biology may generate “dual-use” findings that could enable bioterrorists to develop new biological warfare tools that are easier to obtain and far more lethal than today’s military bioweapons. The most commonly cited example of this is the resurrection synthesis of the 1918 pandemic influenza strain by CDC researchers (Tumpey et al., 2005) and the possibility of recreating smallpox from easily-ordered DNA (Venter, 2005). There has been a growing consensus that not all sequences should be made publicly available, but the fact remains that such powerful recombinant DNA technologies could be used for harm.

Attempts to limit access to the DNA synthesis technology would be counterproductive, and a sensible approach might include some selective regulation while allowing research to continue. Now, as SARS, bird influenza, and other infectious diseases emerge, these recombinant DNA techniques enhance our ability to manage this threat today compared to what was possible just 30 years ago. The revolution in synthetic biology is nothing less than a push on all fronts of biology, whether that impacts environmental cleanup, chemical synthesis using bacteria, or human health.
Conclusion
At present, synthetic biology’s myriad implications can be glimpsed only dimly. The field clearly has the potential to bring about epochal changes in medicine, agriculture, industry, and politics. Some critics consider the idea of creating artificial organisms in the laboratory to be an example of scientific hubris, evocative of Faust or Dr. Frankenstein. However, the move from understanding biology to designing it for our requirements has always been a part of the biological enterprise and has been used to produce chemicals and biopharmaceuticals. Synthetic biology represents an ambitious new paradigm for building new biosystems with rapidly increasing complexity in versatility and applications. These tools for engineering biology are being developed and distributed, and a societal framework is needed not only to help create a global community that celebrates biology but also to lead the enormously constructive invention of biological technologies.

Figure 2. Abstraction Hierarchy. Abstraction levels are important for managing complexity and are used extensively in engineering disciplines. As biological parts and functions become increasingly complex, writing ‘code’ with individual nucleotides is rapidly becoming more difficult. Currently, researchers spend considerable time learning the intricacies of every step of the process, and stratification would allow for specialization and faster development. Ideally, individuals can work at individual levels: one can focus on part design without worrying about how genetic oscillators work, while others string together parts to construct whole systems for possible biosensor applications. Image originally made by Drew Endy.
References

(2004). Paper presented at Synthetic Biology 1.0: The First International Meeting on Synthetic Biology, Massachusetts Institute of Technology.

Borrell, B. (2009, July 14). Clean dreams or pond scum? ExxonMobil and Craig Venter team up in quest for algae-based biofuels. Scientific American. Retrieved from http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=clean-dreams-or-pond-scum-exxonmobi-2009-07-14

Ro, Dae-Kyun, Paradise, Eric M., Ouellet, Mario, Fisher, Karl J., Newman, Karyn L., Ndungu, John M., . . . Keasling, Jay D. (2006). Production of the antimalarial drug precursor artemisinic acid in engineered yeast. Nature, 440(7086), 940-943. doi: 10.1038/nature04640

Tucker, J.B., & Zilinskas, R.A. (2006, Spring). The promise and perils of synthetic biology. The New Atlantis, 12, 25-45.

Registry of standard biological parts. Retrieved from http://parts.mit.edu/

Morton, O. (2005, January). How a Biobrick works. Wired, 13(01). Retrieved from http://www.wired.com/wired/archive/13.01/mit.html?pg=5

Hasty, Jeff, McMillen, David, & Collins, J. J. (2002). Engineered gene circuits. Nature, 420(6912), 224-230. doi: 10.1038/nature01257

Levskaya, Anselm, Chevalier, Aaron A., Tabor, Jeffrey J., Simpson, Zachary Booth, Lavery, Laura A., Levy, Matthew, . . . Voigt, Christopher A. (2005). Synthetic biology: Engineering Escherichia coli to see light. Nature, 438(7067), 441-442. doi: 10.1038/nature04405

Atkinson, Mariette R., Savageau, Michael A., Myers, Jesse T., & Ninfa, Alexander J. (2003). Development of Genetic Circuitry Exhibiting Toggle Switch or Oscillatory Behavior in Escherichia coli. Cell, 113(5), 597-607.

Endy, Drew. (2005). Foundations for engineering biology. Nature, 438(7067), 449-453. doi: 10.1038/nature04342

Canton, B. (2005). Engineering the Interface Between Cellular Chassis and Integrated Biological Systems. Ph.D., Massachusetts Institute of Technology.

Alon, U. (2003). Biological Networks: The Tinkerer as an Engineer. Science, 301(5641), 1866-1867. doi: 10.1126/science.1089072

Haseltine, Eric L., & Arnold, Frances H. (2007). Synthetic Gene Circuits: Design with Directed Evolution. Annual Review of Biophysics and Biomolecular Structure, 36(1), 1-19. doi: 10.1146/annurev.biophys.36.040306.132600

Tumpey, Terrence M., Basler, Christopher F., Aguilar, Patricia V., Zeng, Hui, Solorzano, Alicia, Swayne, David E., . . . Garcia-Sastre, Adolfo. (2005). Characterization of the Reconstructed 1918 Spanish Influenza Pandemic Virus. Science, 310(5745), 77-80. doi: 10.1126/science.1119392

Venter, J. C. (2005). Gene Synthesis Technology. Paper presented at the State of the Science National Science Advisory Board on Biosecurity. http://www.webconferences.com/nihnsabb/july_1_2005.htmll




Network Forensics Analysis Using Piecewise Polynomials

Sean Marcus Sanders
School of Electrical and Computer Engineering
Georgia Institute of Technology

The information transferred over computer networks is vulnerable to attackers. Network forensics deals with the capture, recording, and analysis of network events to determine the source of security attacks and other network-related problems. Electronic devices send communications across networks by sending network data in the form of packets. Networks are typically represented using discrete statistical models. Discrete statistical models are computationally expensive and utilize a significant amount of memory. A continuous piecewise polynomial model is proposed to address the shortcomings of discrete models and to further aid forensic investigators. Piecewise polynomial approximations are beneficial because sophisticated statistics are easier to perform on smooth continuous data, rather than on unpredictable discrete data. Polynomials, moreover, utilize roughly six times less memory than a collection of individual data points, making this approach storage-friendly. A variety of networks have been modeled, and it is possible to distinguish network traffic using a piecewise polynomial approach.

These preliminary results show that representing network traffic as piecewise polynomials can be applied to the area of network forensics for the purpose of intrusion analysis. This type of analysis will consist of not only identifying an attack, but also discovering details about the attacks and other suspicious network activity by comparing and distinguishing archived piecewise polynomials.

Advisor:
Henry L. Owen
School of Electrical and Computer Engineering
Georgia Institute of Technology
Introduction

Problem
Network forensics deals with the capture, recording, and analysis of network events to determine the source of security attacks and other network-related problems (Corey, 2002). One must differentiate malicious traffic from normal traffic based on the patterns in the data transfers. Network communication is ubiquitous, and the information transferred over these networks is vulnerable to attackers who may corrupt systems, steal valuable information, and alter content. Network forensics is a critical area of research because, in the digital age, information security is vital. With sensitive information such as social security numbers, credit card information, and government records stored on a network, the potential threat of identity theft, credit fraud, and national security breaches increases. During July of 2009, North Korea was the main suspect behind a campaign of cyber attacks that paralyzed the websites of US and South Korean government agencies, banks and businesses (Parry, 2009). As many as 10 million Americans a year are victims of identity theft, and it takes anywhere from 3 to 5,840 hours to repair damage done by this crime (Sorkin, 2009). In order to effectively prosecute network attackers, investigators must first identify the attack, and then gather evidence on the attack.

The process of identifying an attack on a network is known as intrusion detection. The two most popular methods of intrusion detection are signature and anomaly detection (Mahoney, 2008). Signature detection is a technique that compares an archive of known attacks on a network with current network traffic to discern whether or not there is malicious traffic. This technique is reliable on known attacks but has a great disadvantage on novel attacks. Although this disadvantage exists, signature detection is well understood and widely applied. Anomaly detection, on the other hand, is a technique that identifies network attacks through abnormal activity, which does not necessarily imply malicious traffic. Anomaly detection is more difficult to implement compared to signature detection because it must flag traffic as abnormal and discern the intent of the traffic. Abnormal traffic does not necessarily imply malicious traffic.

Electronic devices such as notebooks and cellular phones communicate by transferring data across the Internet using packets. A packet is an information block that the Internet uses to transfer data. In most cases, the data being transferred across the Internet must be divided into hundreds, even thousands of packets to be completely transferred. Similar to letters in a postal system, packets have parameters for delivery such as a source address and destination address. Packets include other parameters such as the amount of data being sent in a packet and a checking parameter to ensure that the data sent was not corrupted. The Internet is modeled as a discrete collection of individual data points because the Internet uses individual packets to transfer data. Discrete processes are difficult to model and analyze as opposed to continuous processes because there is not a definite link between two similar events. For example, the concept of a derivative in calculus can only give a logical result if the data is continuous. In many cases, experimental results are given as discrete values. Scientists, engineers, and mathematicians sometimes use the least squares approximation to give a continuous model of the data given. Continuous models that represent discrete data are often preferred because they can be used for different types of analysis such as interpolation and extrapolation.
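As a concrete illustration of that idea, the self-contained Scala sketch below fits a low-degree polynomial to discrete per-second packet counts with ordinary least squares via the normal equations. The sample counts, the polynomial degree, and all names are assumptions made up for this example; the paper does not present its own fitting procedure at this point.

object LeastSquaresFitSketch {
  // Solve the linear system A*x = b with Gaussian elimination (partial pivoting).
  def solve(a: Array[Array[Double]], b: Array[Double]): Array[Double] = {
    val n = b.length
    val m = Array.tabulate(n, n + 1)((i, j) => if (j < n) a(i)(j) else b(i))
    for (col <- 0 until n) {
      val pivot = (col until n).maxBy(r => math.abs(m(r)(col)))
      val tmp = m(col); m(col) = m(pivot); m(pivot) = tmp
      for (row <- col + 1 until n) {
        val factor = m(row)(col) / m(col)(col)
        for (j <- col to n) m(row)(j) -= factor * m(col)(j)
      }
    }
    val x = new Array[Double](n)
    for (i <- n - 1 to 0 by -1)
      x(i) = (m(i)(n) - (i + 1 until n).map(j => m(i)(j) * x(j)).sum) / m(i)(i)
    x
  }

  // Least-squares coefficients c0..cd of a degree-d polynomial through (t, y):
  // solve the normal equations (X^T X) c = X^T y, where X is the Vandermonde matrix.
  def fit(t: Array[Double], y: Array[Double], degree: Int): Array[Double] = {
    val cols = degree + 1
    val x = t.map(ti => Array.tabulate(cols)(j => math.pow(ti, j)))
    val xtx = Array.tabulate(cols, cols)((i, j) => x.map(row => row(i) * row(j)).sum)
    val xty = Array.tabulate(cols)(i => x.zip(y).map { case (row, yi) => row(i) * yi }.sum)
    solve(xtx, xty)
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical packets-per-second counts over a ten-second window.
    val t = Array.tabulate(10)(_.toDouble)
    val y = Array(12.0, 15.0, 14.0, 20.0, 31.0, 45.0, 44.0, 38.0, 25.0, 18.0)
    val c = fit(t, y, degree = 3)              // four coefficients summarize ten samples
    println(c.zipWithIndex.map { case (ci, i) => f"$ci%.3f*t^$i" }.mkString(" + "))
  }
}

Storing a few coefficients per fitted segment instead of every raw sample is also where the memory savings claimed for the piecewise approach would come from, though the exact ratio depends on the segment length and polynomial degree chosen.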
attacks on a network with current network traffic to
discern whether or not there is malicious traffic. This Many forensic investigators use graphs and statistical
technique is reliable on known attacks but has a great methods, such as clustering, to model network traf-
disadvantage on novel attacks. Although this disadvan- fic (Thonnard, 2008). These graphs and statistics help
tage exists, signature detection is well understood and classify complex networks into patterns. These patterns
widely applied. Anomaly detection, on the other hand, are typically stored and represented in a discrete fash-
is a technique that identifies network attacks through ion because networks transfer data in a discrete manner.

article: Sanders
27
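For reference, the least squares criterion mentioned above can be stated explicitly (this formula is a standard definition added here, not part of the original article): an Nth-order polynomial fit to discrete samples (x_i, y_i), i = 1, ..., m, chooses coefficients c_0, ..., c_N that minimize the sum of squared residuals,

\[
\min_{c_0,\dots,c_N} \; \sum_{i=1}^{m} \Bigl( y_i - \bigl(c_N x_i^{N} + \dots + c_1 x_i + c_0\bigr) \Bigr)^2 .
\]

The resulting polynomial passes near, rather than through, the data points, which is what makes it a continuous approximation of a discrete process.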
Many forensic investigators use graphs and statistical methods, such as clustering, to model network traffic (Thonnard, 2008). These graphs and statistics help classify complex networks into patterns. These patterns are typically stored and represented in a discrete fashion because networks transfer data in a discrete manner. These patterns are used in combination with signature and anomaly detection techniques to identify network attacks (Shah, 2006). In many cases these network patterns are archived and kept for extended periods of time. This storage of packets is needed to compare past network traffic with current network traffic, in order to effectively classify network events. Despite this necessity, the storage of packet captures is not desired because packet captures use a significant amount of memory storage, a limited and costly resource. After a variable amount of time, the archived network data is deleted to free memory for future network patterns to be archived (Haugdahl, 2007). Detailed records of network patterns can be stored for longer periods of time by increasing the amount of free memory or decreasing the amount of archived traffic.

A continuous polynomial representation of a network is preferred to a discrete representation because discrete representations limit the types of analysis and statistics that can be performed. Polynomial approximations of data have limitations as well, such as failing to represent exact behavior, which can be vital depending on the system being modeled. In order to effectively differentiate traffic, a continuous polynomial approximation must reveal enough detail about the network traffic. Polynomial representations of data should require less memory storage than discrete representations. For instance, the polynomial y = x^2 could represent a million data points yet take up little memory. This observation is important because, in the area of network forensics, memory storage space is a critical factor.

Related Work
Shah et al. (2008) applied dynamic modeling techniques to detect intrusions using anomaly detection. This particular form of modeling was only used for identifying intrusions and not for analyzing them or conducting a forensic investigation. Ilow et al. (2000) and Wang et al. (2007) both used modeling techniques to try to predict network traffic. Wang et al. took a polynomial approach that utilized Newton's Forward Interpolation method to predict and model the behavior of network traffic. This technique used interpolation polynomials of arbitrary order to approximate the dynamic behavior of previous network traffic. Wang et al.'s technique is useful for modeling general network behavior, but using the polynomial approach for intrusion analysis is another issue. Wang et al.'s technique proved that general network behavior can be predicted and modeled using polynomials, but did not prove whether individual network events can be distinguished and categorized through the use of polynomials.

Proposed Solution
Network data is discrete, scattered, and difficult to approximate; however, approximation and modeling techniques are necessary to define networks and to perform important statistics on the network data. Such statistics include the average amount of data each packet carries, the average rate at which packets arrive at a computer, and how many packets are lost before delivery. These values are used to adequately classify network traffic as normal or malicious. When a system is approximated as a polynomial, it is faster to perform basic mathematical operations and statistics such as derivatives, integrals, standard deviation, and variance. The ease of computing a parameter allows for a more efficient analysis of the data. Networks send an enormous amount of data each day, and precious time is required to process this data.

While the polynomial approximation is fairly accurate, forming a long, complex approximating polynomial is not practical for the purposes of network forensics since a network will seldom have identical behavior in each session. Assuming each of the five segments of points shown in Figure 1 represents a network event (i.e., a web site visited), investigators can approximate and classify
the network activity. The network traffic modeled in all plots in this paper represents the same parameters. The x-axis represents the packet capture time, where the unit of time is not represented in seconds (i.e., real time) but rather a time relative to the order in which the packets were captured. In other words, time represented in the context of this paper does not represent real time, but serves as an ordering parameter for the data being modeled (approximated packet data length). This parameter is referred to as time because the data being modeled is time dependent. The y-axis represents the data length of the packet captured in bytes. Throughout this paper the terms packet, capture time, and time will be used interchangeably. In reality, different network events require different amounts of time and numbers of packets than others. For simplicity, all network events plotted in this paper are scaled so that each network event is modeled by equal time intervals.

Figure 1. Plot of random discrete data.

If these segments were in a different order (i.e., the same web sites were visited in a different order), then the single polynomial in Figure 2 would not be able to compensate for these changes and would be unable to efficiently classify similar network traffic. Essentially, if this single polynomial method were applied, one would need 120 (5!) different polynomials to represent visiting five different websites in every possible order. To counter this issue, the idea of approximating network traffic by using a piecewise polynomial is proposed. A single polynomial defines one function to represent data for all units of time, while a piecewise polynomial defines separate functions at distinct time intervals and connects these respective pieces to form a single continuous data representation. The property of a piecewise polynomial is important in modeling network traffic as opposed to a single polynomial because many different types of network events can occur.

Figure 2. Single polynomial approximation of data represented in Figure 1.
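To make the distinction between the two representations concrete (the notation below is added for clarity and is not from the original article), a piecewise polynomial over K time intervals can be written as

\[
p(t) = p_k(t) \quad \text{for } t \in [t_{k-1}, t_k), \qquad k = 1, \dots, K,
\]

where each p_k is an ordinary polynomial fit only to the packets captured in its own interval, so reordering the intervals reorders the pieces without changing any individual p_k.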
A piecewise polynomial can isolate and model the behavior of a single network event, while a single polynomial is limited to modeling clusters of events. The modeling of event clusters is not desired because it will increase the difficulty of differentiating network traffic based on a single event. Such a scenario will result in a malicious event being clustered with a normal event, which could lead to failure in identifying an attack. A piecewise polynomial approximation should effectively classify every network event that has transpired using a unique piecewise approximation. The piecewise polynomial approximation of the data shown in Figure 1 is shown in Figure 3.

Figure 3. Piecewise polynomial plot of data represented in Figure 1.

It is clear that while both polynomial approximations in Figure 2 and Figure 3 can model the data represented in Figure 1, the piecewise polynomial (Figure 3) is more accurate and robust than a single polynomial. A single polynomial should not be used to model more than one network event, because it will not be able to represent the individual network events that it is composed of. This example is meant to emphasize that if a sequence of 100 network events were defined using one single polynomial, it would be difficult to identify which network events behaved in a certain way. A piecewise polynomial model will address this issue by modeling each network event as an individual polynomial. If the order of the network events (segments) were changed, the individual polynomials would just occur at different time intervals, but each segment would remain the same. In other words, in a piecewise polynomial approximation each segment is represented by a distinct polynomial.

The basic concept is that while the network will not behave the same all the time, it will behave the same in certain pieces. If network traffic can be quantified using piecewise polynomials, investigators can apply signature and anomaly detection techniques to identify and investigate events from a forensics perspective. Piecewise polynomial approximations will be effective because they should approximate the behavior pattern of a network with enough resolution to differentiate network traffic.

The primary goal is to test whether or not a piecewise polynomial approach can approximate network data with enough precision to distinguish network traffic. If there are no distinct differences in piecewise polynomial approximated network traffic, then this approach will not be valid for this application. Conversely, if a piecewise polynomial approximation can effectively differentiate network traffic, then it can be applied to intrusion analysis, because intrusion analysis is primarily focused on classifying traffic. This application is beneficial because polynomial-represented data should occupy less memory storage than discrete data, and polynomial data have fewer limitations on the type of analysis that can be performed.
Methodology
Tools and Algorithms
Wireshark was used to capture network traffic in packet capture files. A packet capture is a collection of the network traffic that has made contact with a computer and is stored in a packet capture file (.pcap file). Wireshark is an effective tool for capturing and filtering network traffic, but does not allow for a custom analysis of network traffic. The Libpcap library, which is used by Wireshark, was investigated in order to use the captured network traffic as an input to a custom parsing algorithm. This algorithm opens .pcap files that were saved using Wireshark and extracts the source address, destination address, packet data length, and packet protocol into a format that can be used for custom processing. After these aspects of the packet were extracted, they were saved in a .csv file (comma-separated values file) for processing in MATLAB. Although the parameters initially extracted (source address, destination address, packet data length, and packet protocol) are not sufficient to analyze and detect all malicious activity, these parameters are a good starting point for a proof of concept implementation and analysis of this approach.

MATLAB was chosen for its versatility, variety of functions, and computing speed in processing large vectors. MATLAB has two built-in functions called Polyfit() and Polyval() that respectively compute polynomial coefficients and evaluate polynomials by using input data. In MATLAB, the input and output data of polyfit() and polyval() are represented as vectors. Polyfit() uses the least squares approximation to approximate the coefficients of a best-fit, Nth-order polynomial for the given vectors of data, X and B. In statistics, the least squares approximation is used for estimating polynomial coefficients based on discrete parameters. Polyval() can best be viewed as a support function for Polyfit(), which gives the approximated numerical values, Y, of the polynomial approximated by Polyfit(). A clearer example of how Polyfit() and Polyval() are related is shown in Equation 1.

P = polyfit(X, B, N)
Y = polyval(P, X)          (1)

Piecewise.m is a custom-developed script, written in MATLAB. Essentially, Piecewise.m uses Polyfit() and Polyval() to create piecewise polynomials. This script was designed to use packet data lengths as the parameter on the y-axis, and packet capture time as the parameter on the x-axis.
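The article does not reproduce Piecewise.m itself; the following MATLAB sketch is only an illustration of how Polyfit() and Polyval() could be combined for this purpose. The file name packets.csv, its column layout, the number of segments, and the segment boundaries are assumptions made for the example, not details taken from the study.

% Illustrative piecewise fit in the spirit of Piecewise.m (assumed sketch,
% not the authors' actual script). Assumes packets.csv holds one row per
% packet: column 1 = relative capture time, column 2 = data length (bytes).
data = csvread('packets.csv');
t = data(:, 1);                          % relative capture time
len = data(:, 2);                        % packet data length in bytes
edges = linspace(min(t), max(t), 6);     % five equal segments (network events)
N = 2;                                   % polynomial order for each segment
approx = zeros(size(len));
for k = 1:numel(edges) - 1
    idx = t >= edges(k) & t <= edges(k + 1);  % packets in this segment
    P = polyfit(t(idx), len(idx), N);         % least squares fit per segment
    approx(idx) = polyval(P, t(idx));         % evaluate the segment polynomial
end
plot(t, len, '.', t, approx, '-')             % compare data with the model

Each pass through the loop is simply the single-polynomial case of Equation 1 restricted to one time interval, which is why the segments can be compared independently of the order in which the corresponding events occurred.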
Important Decisions and Causes for Error
An important parameter used to approximate the data is the order of the polynomial. Typically, the higher the order of the polynomial, the more accurate the approximation; in an approximation of network behavior and patterns, though, modeling exact behavior is unnecessarily complex, whereas approximating behavior is more useful. Thus, the orders of the piecewise polynomials are manually chosen based on the predicted complexity of the network traffic. More complex traffic should be approximated with a higher order polynomial than less complex traffic. This is an assumption that will be used to designate the order of a polynomial given the type of network being modeled. Network traffic was also modeled using different orders to determine the effect(s) that changing orders have on the approximation of traffic.

When approximating polynomials, ensuring that there are enough data points to create a reliable approximation is important. For example, if there is only one data point, a first order polynomial would give an inaccurate approximation, because at least two points are needed to approximate a line. The general rule is that the accuracy of the polynomial approximation depends directly on the order of the polynomial and the number of data points used to define the polynomial. The number of data points must be at least one more than the desired order to yield an accurate polynomial approximation. In most cases, the higher the order of the polynomial, the more accurate the approximation is. On the other hand, a polynomial of too high an order may yield unrealistic results. Thus, finding a polynomial order that yields both approximate and realistic results is important.

Experiments
Closed/Controlled Network Behavior
The first step to determine whether a polynomial can accurately approximate and differentiate network behavior is to analyze the behavior of a closed/controlled network. As opposed to open networks, closed networks are not connected to the Internet. The designed closed network was composed of two Macbooks, with four virtual machines operating on the separate Macbooks. Figure 4 gives a visual representation of the designed closed network.

A virtual machine is a software implementation of a machine that executes programs like a physical machine. Virtual machines operate on a separate partition of a computer and utilize their own operating system. Due to the hardware limitations of physical machines, virtual machines and physical machines do not execute commands simultaneously. From a networking perspective the execution of commands is not a problem, because once connected, networks utilize protocols to send and sometimes regulate the flow of network traffic. In other words, the network does not know that there is a virtual machine operating on a physical machine and thus supports multiple simultaneous network connections.

Figure 4. Visual representation of designed closed network with virtual machines. VMs circled.

Packet captures were performed using Wireshark on the Macbook operating with three virtual machines on the ethernet interface. A variety of packet captures were made to compare and contrast network behavior using web pages. If the resulting piecewise polynomials could effectively compare and contrast network traffic based on various behaviors, then the polynomial approximation would be considered a success. The descriptions of these packet capture files are listed below.

• Idleclosed.pcap— a .pcap file that captures the random noise present when the network is idle.

• Icmpclose.pcap— a .pcap file that is composed primarily of ping commands from one Macbook to the other. Ping commands are used to test whether a particular computer is reachable across a network. This test is performed by sending packets of the same length to a computer, and waiting to receive a reply from that computer.

• Httpclose.pcap— a .pcap file that includes a brief ping command being sent from one Macbook to the other Macbook, but is dominated by HTTP traffic (basic website traffic). This file also includes a period of idle behavior where the network is at rest.

• Packet Capture A— a .pcap file that contains the network data for visiting a specific site hosted on one Macbook.
• Packet Capture B— a separate .pcap file that contains the network data for visiting the same site visited in Packet Capture A, hosted on the same Macbook, at a different time.

Idleclosed.pcap and Icmpclose.pcap yield piecewise polynomials that model the behavior of idle and ping traffic respectively. These piecewise polynomials should identify both the idle and ping behavior found in Httpclose.pcap. The piecewise polynomials that model two separate .pcap files going to the same pages (i.e., Packet Capture A and Packet Capture B) should resemble each other in behavior. A second order piecewise polynomial is used for the closed network analysis because it is assumed that closed network events should not be extremely complex. Higher orders are avoided wherever possible for the reasons explained in Important Decisions and Causes for Error.

Open/Internet Network Behavior
While experimenting with a controlled network is useful, a network that is connected to the Internet will behave differently from one that is not. To investigate a more realistic scenario, one Macbook was utilized to make different packet captures under similar conditions to those in Closed/Controlled Network Behavior, but with contact to the Internet. The details of the packet capture files are listed below.

• Internet.pcap— a .pcap file that contains network data captured while actively browsing the Internet.

• Packet Capture C— a .pcap file that contains the network data for visiting a sequence of three web sites on the Internet in a particular order (google.com, gatech.edu, and facebook.com).

• Packet Capture D— a separate .pcap file that contains the network data for visiting the same web sites as Packet Capture C but in a different order (gatech.edu, facebook.com, and google.com).

Internet.pcap was used to show the effect the order of a polynomial has on the approximation because it contains the most complex network traffic. Packet Capture C and Packet Capture D were used to determine if different web sites exhibit distinguishable behavior when modeled using piecewise and single polynomials. These models test the theorized benefit of piecewise polynomials over single polynomials, similar to the example in the Proposed Solution. Fourth order piecewise and single polynomials are used for the open network analysis, as opposed to second order, because it is assumed that open network events should be more complex than closed network events.

Results
Closed Network Analysis
Ping Analysis. In the closed network case, as defined in Closed/Controlled Network Behavior, Httpclose.pcap and Icmpclose.pcap both contained the same type of ping traffic going through the network, but in different packet captures and at different times. The resulting piecewise polynomial that described this traffic in both packet captures was the constant 98. This constant value of 98 represents that every packet captured had a packet data length of 98 bytes. A constant piecewise polynomial is an acceptable value because the ping command constantly sends packets of identical lengths to a single destination.

Traffic Analysis. Packet Capture A and Packet Capture B are two different .pcap files that captured the same network activity at approximately the same time interval, and are represented as second order piecewise polynomials. According to Figure 5, the two packet captures are represented in a very similar manner. This result is interesting because, while the results are similar, they are not exact. This mismatch is not damaging, as Figure 5 shows the relationship of the two data files.
The relationship of the first segments of data is that they are constant around the same value, while the second segments of the data are both decreasing, concave down, and share similar values.

Figure 5. Second order relationship of similar packet capture files.

Traffic Analysis of Open Networks
The similar packet capture files, Packet Capture C (the upper plot) and Packet Capture D (the lower plot), were plotted in Figure 6 using a fourth order single polynomial. Figure 7 shows the plot of Packet Captures C and D using a fourth order piecewise polynomial.

Packet Capture C visits google.com first, followed by gatech.edu, and ends with facebook.com, while Packet Capture D visits gatech.edu first, followed by facebook.com, and ends with google.com. Figure 7 shows that each piecewise polynomial gives each website visited a unique behavior that can be identified with visual inspection.

Google.com behaves in a sinusoidal manner, gatech.edu is represented as a concave down parabola, and facebook.com exhibits a strong linear behavior with a small positive slope. Although the three web sites visited can be clearly identified in Figure 7, this does not seem to be the case in Figure 6. In the single polynomial approximation the data looks relatively similar, and it is difficult to discern which part of the polynomial represents which website. This result shows that different network events can be approximated and distinguished using a piecewise polynomial approach, whereas a single polynomial approximation is not sufficient to distinguish network events.

Figure 6. Single polynomial comparison plots of similar out of order traffic.

Significance of Order
Internet.pcap was plotted using zero, second, and fifth orders to discern the effect order has on the approximation of a polynomial.

Figure 7. Piecewise polynomial comparison of similar out of order traffic.

Figure 8. Internet.pcap plots of varying orders.
Figure 8 shows that the higher the order of the polynomial, the more detail is shown about the network. Despite revealing more details of a network, Figure 8 does not show which order of the polynomial yields better results. Figure 8 is shown to illustrate the effect order has on the approximation of network traffic. More detail is not necessarily better, because an overly detailed approximation may not be robust enough to identify similar future network traffic and is more difficult to interpret.

Memory Savings
Internet.pcap was saved in two separate files. One file was saved using Internet.pcap's polynomial representation, and a separate file was saved using Internet.pcap's representation as a collection of individual data points (i.e., packets). The polynomial file was 12 KB, while the file containing the collection of individual data points was 72 KB. This size difference indicates that saving network traffic as polynomials instead of as a collection of individual points saves memory.
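As a rough illustration of the storage argument (a sketch with made-up numbers, unrelated to the actual Internet.pcap measurement), the coefficients of a fitted polynomial occupy a small, fixed amount of memory no matter how many packets they summarize:

% Sketch of the memory comparison between raw samples and a polynomial fit.
x = linspace(0, 1, 1e6)';   % one million sample positions (~8 MB of doubles)
y = x.^2;                   % the discrete data (~8 MB of doubles)
p = polyfit(x, y, 2);       % second-order fit: only three coefficients (~24 bytes)
whos x y p                  % reports the bytes consumed by each variable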
Discussion of Results
The plots in Results are intended to show whether piecewise polynomials can effectively differentiate and link network traffic. The ping traffic analyzed in Ping Analysis was approximated by piecewise polynomials that exhibited constant behavior. Although this result is desired, ping traffic is the simplest type of network traffic and is not sufficient to prove the validity of a piecewise polynomial approach. Traffic analysis of the closed network yielded similar results to the ping analysis, successfully differentiating and linking network traffic. Although the closed network analysis was a success, in reality most network traffic occurs on the Internet. Thus the open network results are of primary interest.

The open network single polynomial approximation was unable to differentiate and link network events, as shown in Figure 6. The plot given in Figure 6 shows two similar curves of differently ordered network traffic. Although this result is not desired, it was expected that a single polynomial approximation would not be able to classify out of order traffic effectively. Conversely, Figure 7 shows that a piecewise polynomial approximation was able to distinguish each section of the network traffic that was captured. These results show that a piecewise polynomial approximation can be used to classify and differentiate network traffic.

Memory storage is also of primary concern when modeling network data. The Internet packet capture shows that the discrete representation of data utilized 72 KB of memory storage, while the polynomial representation utilized 12 KB of memory storage. This result shows that polynomial representations use roughly one-sixth of the memory storage of discrete representations. This size difference indicates that storing network traffic as polynomials instead of as a collection of individual points significantly saves memory. This outcome is important in network forensics because network events can be archived for a longer amount of time than before. This extra storage allows for more extensive and detailed investigations.

Conclusion
Networks can be approximated using piecewise polynomials with enough detail to aid forensic investigators. The precision of the approximation depends directly on the order of the polynomial used to approximate the data. In general, the higher the order, the more details are revealed. Networks behave differently, and therefore every network analyzed needs its own set of polynomials to approximate its respective network events. The use of piecewise polynomials is also beneficial because polynomials use roughly one-sixth the memory of individual data points.

Future Work
Piecewise polynomials will be applied to the area of network forensics for intrusion analysis. This analysis will require collection of known data that are classified as either malicious or normal. Also, more information about packets will have to be quantified to further classify and distinguish network traffic, because approximating packet length and protocols is not sufficient to perform a thorough analysis. The malicious data will be modeled as piecewise polynomials and used for signature detection. The normal network traffic will also be modeled as piecewise polynomials and used for anomaly detection.

Future research also includes identifying what certain traffic patterns represent, such as web browsing traffic, video streaming traffic, or file downloading traffic. This classification of network events will enhance a forensics investigator's ability to quickly determine what events have transpired on a network.

Acknowledgements
This research was conducted with the guidance of Kevin Fairbanks and Henry Owen and supported in part by a Georgia Tech President's Undergraduate Research Award as a part of the Undergraduate Research Opportunities Program. This research was also supported in part by the Georgia Tech Department of Electrical and Computer Engineering's Opportunity Research Scholars Program.
References
Corey V, Peterman C, Shearin S, Greenberg MS, Bokkelen J (2002) Network Forensics Analysis. IEEE Internet Computing, http://computer.org/internet/, December 2002.
Haugdahl JS (2007) Network Forensics: Methods, Requirements, and Tools. www.Bitcricket.com, November 2007.
Ilow J (2000) Forecasting Network Traffic Using FARIMA Models with Heavy Tailed Innovations. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Volume 6, IEEE Computer Society, Washington DC, pp. 3814-3817.
Mahoney MV, Chan PK (2008) Learning Models of Network Traffic for Detecting Novel Attacks. Florida Institute of Technology, www.cs.fit.edu/~mmahoney/paper5.pdf.
Parry RL (2009) North Korea Launches Massive Cyber Attack on Seoul. The Times, http://www.timesonline.co.uk/tol/news/world/asia/article6667440.ece, July 2009.
Shah K, Jonckheere E, Bohacek S (2006) Dynamic Modeling of Internet Traffic for Intrusion Detection. EURASIP Journal on Advances in Signal Processing, Volume 2007, Hindawi Publishing Corporation, May 2006.
Sorkin (2009) Identity Theft Statistics. spamlaws.com.
Thonnard O, Dacier M (2008) A Framework for Attack Pattern Discovery in Honeynet Data. The International Journal of Digital Forensics and Incident Response, Science Direct, Baltimore, Maryland, August 2008.
Wang J (2007) A Novel Associative Memory System Based Modeling and Prediction of TCP Network Traffic. Advances in Neural Networks, Springer Berlin, July 2007.

Characterization of the biomechanics of the GPIbα-vWF tether bond using von Willebrand Disease causing mutations R687E and wt vWF A1A2A3
Venkata Sitarama Damaraju
School of Biomedical Engineering
Georgia Institute of Technology

Platelet aggregation plays an important role in controlling bleeding by forming a hemostatic plug in response to vascular injuries. GPIbα is the platelet receptor that medi-
ates the initial response to vascular injuries by tethering to the von Willebrand factor
(vWF) on exposed subendothelium. When this occurs, platelets roll and then firmly
adhere to the surface through the GPIIb-IIIa integrin present on the platelet surface. A
hemostatic plug then forms by the aggregation of bound and free platelets which then
seals the injury site.
vWF is a multimer of many monomers, with each containing eleven domains. In this
experiment, biomechanics of two of the eleven domains, gain of function (GOF) R687E
vWF-A1 and wild type (wt) vWF-A1A2A3, were studied using videomicroscopy un-
der varying shear stresses. This experiment used a parallel flow chamber coated along
one surface with the vWF ligand. A solution containing platelets or Chinese Hamster
Ovary (CHO) cells was perfused at varying shear stresses (0.5 dynes/cm2 to 512 dynes/
cm2) and cell-ligand interactions were recorded.
Results showed that GOF R687E vWF exhibited slip bond behavior with increasing
shear stress, whereas wt A1A2A3 vWF displayed a catch-slip bond transition with
varying shear stresses. Interestingly, wt A1A2A3 vWF displayed two complete cycles of
catch-slip bond behavior, which could be attributed to the structural complexity of the
vWF ligand. However, more experiments need to be performed to further substantiate
these claims. Information on the bonding behavior of each vWF can aid understanding
of the biomechanics of the entire vWF molecule and associated diseases.

Advisor:
Larry V. McIntire
School of Biomedical Engineering
Georgia Institute of Technology
Introduction
Circulating platelets have an important role in healing vascular injuries by tethering, rolling, and adhering to the vascular surface in response to a vascular injury. Under normal physiological conditions, platelets respond to a series of signaling events that cause bound platelets to aggregate and spread across the exposed surface to form a hemostatic plug (Andrews, 1997). These responses are mediated by receptor-ligand interactions between the platelet and the molecules exposed on the surface. GPIbα is the platelet receptor that mediates this initial response to vascular injuries. In arteries this response is initiated when platelet receptor GPIbα tethers to von Willebrand factor, a blood glycoprotein, on exposed subendothelium—the surface between endothelium and artery membrane. When GPIbα initially tethers to von Willebrand factor (vWF), platelets first roll and then firmly adhere to the surface through the GPIbα and GPIIb-IIIa integrins present on the platelet. GPIbα and GPIIb-IIIa integrins are the first two platelet integrins to interact with the vWF molecule (Kroll, 1996). Aggregation of bound platelets with additional platelets from the plasma forms a hemostatic plug that seals the injury site (Ruggeri, 1997).

Mutations in either of these binding partners can result in changes in the initial step of the vascular healing process. Diseases associated with these mutations are called von Willebrand diseases, which can either decrease (loss of function) or enhance (gain of function) the binding activity between the GPIbα and vWF molecules. von Willebrand diseases (VWD) result in a platelet dysfunction that can cause nose bleeding, skin bruises and hematomas, prolonged bleeding from trivial wounds, oral cavity bleeding, and excessive menstrual bleeding. Though rare, severe deficiencies in vWF can have symptoms characteristic of hemophilia, such as bleeding into joints or soft tissues including muscle and brain (Sadler, 1998).

vWF is a multimer of many monomers, with each containing eleven (11) domains (Figure 1) (Berndt, 2000). In this experiment, the biomechanics of two of the 11 domains, in particular gain of function (GOF) R687E vWF-A1 and wild type (wt) vWF-A1A2A3, were studied. The biomechanics of the GPIbα-vWF tether bond of these molecules was studied using videomicroscopy in parallel plate flow chamber experiments. One of the two surfaces of the flow chamber was a 35-mm tissue culture dish coated with the vWF ligand (Figure 2).

Figure 1. The vWF molecule. It is a multimer of many monomers, with each containing 11 domains. Image adapted from Sadler.
Figure 2. Parallel plate flow chamber setup. The floor (bottom) plate in the setup was a 35-mm tissue culture dish coated with vWF ligand. Fluid containing either CHO cells or platelets was perfused at varying shear stresses (0.5 dynes/cm2 to 512 dynes/cm2) across the ligand coated surface. (Diagram labels: upper flow chamber surface; flow chamber floor coated with vWF ligand; fluid flow; platelets expressing GPIbα; non-interacting platelets; interacting platelets.)

Fluid containing either platelets or Chinese Hamster Ovary cells was perfused at varying shear stresses across this ligand coated surface and the interactions were recorded using high speed videomicroscopy. Analysis of these interactions with cell tracking software allowed insight into the bond lifetime of the cells and helped suggest the type of bond present (Yago, 2004).

Studying the biomechanics of individual vWF domains will allow a better understanding of the whole vWF molecule and, more importantly, VWD. With this enhanced understanding of vWF, better and more accurate treatments for VWD can be designed in the future. This knowledge can also be used in studying and preventing life threatening thrombosis and embolism.

Materials and methods
All materials were obtained from the McIntire laboratory stock room. Proper sterile techniques and precautions were used for each of the following procedures.

Cells Used
Either Chinese Hamster Ovary (CHO) cells or fresh platelets were used to study the vWF ligand interactions. Fresh platelets were isolated from blood donors an hour before the experiment. For CHO cells, two specific lineages, αβ9 and β9, were used. CHO αβ9 cells contain a specific integrin that interacts with the vWF ligand, whereas β9 cells do not. Hence, β9 cells served as a control group when CHO cells were used instead of platelets.

Preparation of growth media
Two types of growth media were prepared for the two types of CHO cells: αβ9 and β9. Both media formulations consisted of alpha-Minimum Essential Medium (α-MEM) solution (with 2mM L-glutamine and NaHCO3), 10% Fetal Bovine Serum (FBS) solution, penicillin solution (50X), streptomycin solution (50X), G-418 (Geneticin) solution (50 mg/mL), and methotrexate powder. The only difference between the two media types was the addition of hygromycin B solution (50 mg/mL) in the αβ9 media.

Passaging cells
Proper sterile techniques and precautions were used while passaging CHO αβ9 and CHO β9 cells. CHO cells were cultured in 75 cm2 flasks and incubated at 37° Celsius, 5% CO2 using the growth media prepared. These cells were passaged every 2-3 days in order to maintain 80-90% confluency of cells at all times.

Hepes-Tyrode buffer formulations
Hepes-Tyrode buffer (also referred to as 0% Ficoll) was prepared by mixing the following chemicals in pure deionized water until completely solvated: sodium chloride (135 mM), sodium bicarbonate (12 mM), potassium chloride (2.9 mM), sodium phosphate monobasic (0.34 mM), HEPES (5 mM), D-glucose (5 mM), and BSA (1% weight per volume). CHO cells and platelets were suspended in this buffer for the flow chamber experiments. This solution consisting of cells and buffer was pumped through the flow chamber at various shear stresses.

Ficoll Solution
A more viscous Hepes-Tyrode buffer was prepared by adding 6% Ficoll (weight per volume). The final viscosity was 1.8 times that of Hepes-Tyrode buffer. This Ficoll solution is also referred to as 6% Ficoll.
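For reference (a standard fluid mechanics relation, added here rather than taken from the article), shear stress and shear rate in the flow chamber are linked through the fluid viscosity,

\[
\tau = \mu \dot{\gamma},
\]

so raising the viscosity roughly 1.8-fold with 6% Ficoll raises the shear stress, and hence the force on each tether bond, at any given shear rate; this is what allows force-dependent and transport-dependent effects to be separated later in the analysis.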
Parallel plate flow chamber experiments
A parallel plate flow chamber was used in this experiment. One of the two surfaces of the flow chamber was a 35-mm tissue culture dish coated with the vWF ligand. Fluid containing either CHO cells or platelets was perfused at varying shear stresses (0.5 dynes/cm2 to 512 dynes/cm2) across the ligand coated surface (Figure 2). The interactions were recorded as 4-second videos at 250 frames/second using high speed videomicroscopy (Figure 3). The parallel plate flow chamber set-up was maintained at 37° Celsius for all experiments.

Tracking cell interactions
MetaMorph Offline software was used to track the interactions captured with videomicroscopy. Each 4-second video was opened using this software and a square was drawn around the CHO cell or platelet (hereon referred to as "cell") of interest. Each cell was tracked for at least 250 continuous frames (1 second). In addition, it was ensured, by observing the video, that no other cell bumped into the cell of interest while it was being tracked.

Data analysis
The tracked results from MetaMorph Offline were saved as multiple number strings in Microsoft Excel and processed through MATLAB to compute mean rolling velocities for each shear stress. The mean rolling velocity suggests how fast the cell is rolling while interacting with the vWF ligand at each individual shear stress. These rolling velocities were graphed in Microsoft Excel for each shear stress on a logarithmic scale.

Figure 3. A snapshot of the videomicroscopy recording of CHO cells interacting on a vWF coated surface. The non-rolling cells flow freely over the surface without any interactions. In contrast, the rolling cells are visibly flowing slower as they interact with the vWF ligand. (Image labels: non-rolling cells; rolling cells.)
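The MATLAB processing step is not listed in the article, so the following is only an illustrative sketch of how a mean rolling velocity could be computed from exported track data. The file name, the column layout, and the pixel-to-micrometer calibration factor are hypothetical; only the 250 frames/s rate comes from the text above.

% Illustrative sketch (assumed, not the study's actual script) of computing a
% mean rolling velocity from MetaMorph Offline track data exported to Excel.
% Assumed layout: column 1 = x position, column 2 = y position (pixels), one
% row per frame. The calibration factor below is hypothetical.
track = xlsread('cell_track.xls');       % tracked centroid positions per frame
umPerPixel = 0.5;                        % hypothetical camera calibration (um/pixel)
fps = 250;                               % frame rate used in the experiments
xy = track(:, 1:2) * umPerPixel;         % convert positions to micrometers
step = sqrt(sum(diff(xy).^2, 2));        % displacement between consecutive frames
meanRollingVelocity = mean(step) * fps   % mean rolling speed in um/s for this cell

Averaging such per-cell values within each shear stress would yield mean rolling velocities comparable to those plotted in Figures 4 through 7.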

Results
GPIbα and von Willebrand factor (vWF) interactions were recorded using videomicroscopy in parallel plate flow chamber experiments. These interactions were then tracked using MetaMorph Offline, the tracking software, for at least 250 continuous frames (1 second).

Figure 4. Mean rolling velocity of platelets on gain of function (GOF) R687E A1. The x-axis represents the logarithmic shear stress (dynes/cm2) while the y-axis represents the mean rolling velocity (µm/s). The error bars represent the standard error of the mean (SEM). The increasing velocity suggests a slip bond behavior of GOF R687E.

Figure 5. Mean rolling velocity of platelets from a second donor on gain of function (GOF) R687E A1. The x-axis represents the logarithmic shear stress (dynes/cm2) while the y-axis represents the mean rolling velocity (µm/s). Error bars represent the standard error of the mean (SEM). The increasing velocity suggests a slip bond behavior of GOF R687E.

Figure 6a and 6b. Mean rolling velocity of platelets on wt A1A2A3 vWF. The y-axis in both 6a and 6b represents the mean rolling velocity (µm/s), whereas the x-axis in 6a represents the logarithmic shear stress (dynes/cm2) and in 6b the logarithmic shear rate (s-1). The error bars represent the standard error of the mean (SEM). This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and vWF ligand.

The results from MetaMorph Offline were processed locity is characteristic of a slip bond interaction, because
through MATLAB to compute the mean rolling ve- the molecules tend to “slip off ” the ligand more readily
locities for each shear stress used (0.5 dynes/cm2 to 512 at higher shear stress than at lower shear stress.
dynes/cm2). In order to learn about the GPIbα-vWF
A separate experiment performed with platelets from a
tether bond interaction, mean rolling velocities were
different donor on GOF R687E vWF showed a simi-
plotted versus shear stress for each individual experi-
lar slip bond interaction (Figure 5). Although less data
ment.
points were collected in this experiment, it showed a
Plotting the results for platelets interacting on gain of similar increase in mean rolling velocity with increas-
function (GOF) mutant R687E vWF-A1 molecule re- ing shear stress. A statistical analysis of these two data
vealed a trend of increasing mean rolling velocities with sets revealed a Pearson correlation factor of 0.98 and a
increasing shear stress (Figure 4). The x-axis represents p-value greater than 0.05 for a paired t-test. Therefore,
the logarithmic shear stress (dynes/cm2) while the y-axis reproducibility of this trend affirmed the slip bond
represents the mean rolling velocity (µm/s). The error characteristic of GOF R687E vWF-A1 molecule.
bars are the standard error of mean (SEM), which is cal-
Outliers at high shear stress are attributed to the fact
culated by dividing the standard deviation by the square
that bond lifetime significantly decreases at higher shear
root of number of samples (stdev/√(N)).
stress. As a result, fewer platelets interact at those shears
Intuitively, with increasing shear stress the bond lifetime and thus fewer data points were collected at higher
decreases for each individual bond (one-to-one mole- shear stress compared to at lower shear stresses. This
cule interaction between GPIbα and vWF ligand); con- is reflected with the large SEM bars for data points at
sequently, causing the mean rolling velocity to increase higher shear stress end. Similarly, mean rolling veloc-
at higher shear stresses. This increase in mean rolling ve- ity at the lowest shear stress are also variable because of

the difficulty in distinguishing interacting platelets from results because CHO cells contain isolated GPIbα re-
non-interacting platelets. For both experiments, plate- ceptors, whereas platelets have many molecules on their
lets were suspended in Hepes Tyrode buffer. surface. Thus, the mean rolling velocities are compara-
bly different between them and not comparable.
In contrast, platelets interacting on wild type (wt)
A1A2A3 vWF molecule (Figures 6a and 6b) showed a
different trend compared to GOF R687E vWF. Y-axis Discussion
in both Figures 6a and 6b represent the mean rolling Fresh platelets and wild type (wt) Chinese Hamster
velocity (µm/s), whereas the x-axis in 6a represents the Ovary (CHO) cells were used on gain of function
logarithmic shear stress (dynes/cm2) and in 6b repre- (GOF) R687E vWF or wt A1A2A3 vWF in order to
sents the logarithmic shear rate (s-1). The error bars study some aspects of the GPIbα-vWF tether bond. Par-
represent the SEM. As illustrated by the graphs, mean allel plate flow chamber experiments were the same for
rolling velocity initially decreased, then increased and each vWF molecule. The only difference was the fluid
decreased only to increase again with increasing shear passing through had either platelets or wt CHO cells.
stress (and shear rate). This cycle of decreasing and in- All rolling interactions were observed at 250 frames per
creasing mean rolling velocity is indicative of a catch- second.
slip bond interaction between GPIbα and vWF ligand. It was previously found that wild type-wild type (wt
A decrease in mean rolling velocity correlates with an GPIbα on wt vWF) interactions differ from wt-GOF
increase in bond lifetime of individual bond, and thus (wt GPIbα on GOF vWF) interactions. An additional
indicating a catch bond because the platelet is “caught” experiment (Appendix A, Figure A1) shows platelets
by the ligand. Likewise, an increasing mean rolling ve- on wt vWF-A1. This graph shows a transition of bond-
locity implies a decrease in bond lifetime of the indi- ing behavior from a region of decreasing rolling veloc-
vidual bond as a slip bond interaction. Platelets on wt ity to an increasing rolling velocity as the shear stress
A1A2A3 vWF exhibited two complete cycles of catch- increases. This trend is indicative of a catch-slip bond
slip bond interaction for the range of shear stress mea- transition because the rolling velocity decreases (catch
sured (0.5 dynes/cm2 to 512 dynes/cm2). For this par- behavior) and then increases (slip behavior) with in-
ticular experiment, platelets were suspended in Hepes creased shear stress.
Tyrode buffer (0% Ficoll) and 6% Ficoll solution. Sus- However, results from platelets on GOF R687E vWF
pending platelets in a more viscous solution was used to (Figures 4 and 5) showed an increase in rolling veloci-
verify whether the catch-slip bond was force dependent ties with increased shear stress—indicating only a slip
or transport dependent. bond behavior. This suggests that a catch bond governs
A similar catch-slip bond interaction was illustrated low force binding behavior between wt GPIbα and wt
with Chinese Hamster Ovary (CHO) cells interacting vWF-A1; whereas a slip bond governs binding of GOF
on wt A1A2A3 vWF (Figure 7). Although fewer data R687E at high shear stresses. One possible reason for
points were collected for this experiment, it still dem- this could be the differential force response of the bond
onstrated two cycles of decreasing and then increasing lifetime.
mean rolling velocity with increasing shear stress. No Results of platelets rolling on wt A1A2A3 vWF (Fig-
statistical analysis was performed between these two ures 6a and 6b) showed two complete cycles of bonding

Figure 7. Mean rolling velocity of CHO cells on wt A1A2A3 vWF. The x-axis represents the logarithmic shear stress (dynes/cm2) while the y-axis represents the mean rolling velocity (µm/s). The error bars represent the standard error of the mean (SEM). This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and vWF ligand.

Figure A1. Results of platelets on wt-A1 vWF. The y-axis in both the left and right plots represents the mean rolling velocity (µm/s), whereas the x-axis in the left plot represents the logarithmic shear stress (dynes/cm2) and in the right plot the logarithmic shear rate (s-1). These plots show a catch-slip bond interaction; the rolling velocity decreases and then increases with increasing shear stress (and shear rate).

behavior from a region of decreasing rolling velocity to increasing rolling velocity as shear stress increased. This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand. When rolling velocity decreases, it is indicative of a catch bond, suggesting the bonds are stuck or caught on the ligand and hence slowing its velocity. Likewise, when the rolling velocity increases, it indicates a slip bond because the bond comes off or slips off much quicker and consequently increases the rolling velocity. Based on previous knowledge (Figure A1), this catch-slip bond behavior can be identified with the presence of the wt A1 domain in the wt A1A2A3 vWF ligand. However, the two observed complete cycles of catch-slip bond behavior might be due to the structural complexity of the complete A1A2A3 vWF ligand.

Also, the viscosity of the fluid in which platelets were suspended was increased by 1.8 times. Comparing the results from these two different solutions (0% Ficoll and 6% Ficoll) helped determine whether this catch-slip bond interaction was force dependent or transport dependent. Figures 6a and 6b show a boxed
region where the data points for the two different so- uted to the differential force response of bond lifetime
lutions (with 0% Ficoll and 6% Ficoll) align or overlap between GPIbα and GOF vWF ligand with increasing
when plotted together. Since shear stress data aligns the shear stress.
best compared to shear rate data, it indicates that force,
In addition, studying wt A1A2A3 vWF on platelets and
which regulates shear stress, is probably what governs
wt CHO cells revealed two complete cycles of catch-slip
this catch-slip bond interaction.
bond behavior (Figure 6-7). Based on previous knowl-
A similar catch-slip bond interaction was observed be- edge, this catch-slip bond behavior can be identified
tween wt CHO cells on wt A1A2A3 vWF (Figure 7). with the presence of the wt A1 domain in the A1A2A3 ligand.
The bond behavior transitions from a region of decreas- However, having two cycles of catch-slip bond behav-
ing rolling velocity to a region of increasing rolling ve- ior can be due to the structural complexity of A1A2A3
locity. CHO cells have isolated GPIbα receptors, which vWF ligand.
allows for the isolation of the GPIbα receptor’s contri-
In future studies, more experiments need to be per-
bution to the rolling velocity parameter, since platelets
formed with wt A1A2A3 vWF on platelets and CHO
have many molecules on their surface. Thus, this trend
cells in order to confirm the reproducibility of the results
could be attributed to the GPIbα receptor’s interactions
achieved. More data is needed to support the claim that
with the vWF molecules and A1A2A3 structure.
having two cycles of catch-slip bond can be attributed
Overall, bond behaviors of the two vWF domains, GOF having two cycles of catch-slip bond can be attributed
R687E and wt A1A2A3, were successfully characterized. Similarly, more experiments involving GOF R687E
Although the bonding trends of the vWF ligand appear vWF on platelets and CHO cells will further substanti-
very obvious, more testing will help further substantiate ate the slip bond behavior of GOF vWF. By studying
these claims. Based on the results, the next step would biomechanics and bond behavior of each domain of the
be assessing how these bond types adversely affect plate- vWF molecule, it will allow a better understanding of
let aggregation in presence of a vascular injury. By deter- vWF and VWD.
mining the adverse effects of different bond type in each
vWF domain, it will further help understand VWD and
its causes and potentially lead to a treatment.

Conclusion
Some valuable information on tether bonding between
GPIbα-vWF, specifically GOF R687E vWF and wt
A1A2A3 vWF, was acquired from the four sets of ex-
periments performed. Results from two experiments
revealed a pure slip bond behavior for platelets rolling
on GOF R687E (Figure 4-5). Statistical analysis also
showed a strong correlation and p-value greater than
0.05 between the two experiments involving GOF
R687E vWF; hence, confirming the reproducibility of
slip bond behavior. This slip bond behavior is attrib-

References
Andrews R K, Lopez J A, Berndt M C (1997) Molecu-
lar Mechanisms of Platelet Adhesion and Activation.
International Journal of Biochem Cell Biology 29: 91-
115.
Berndt M, Ward CM (2000) Platelets, Thrombosis, and
the Vessel Wall. Vol. 6. Harwood Academic.
Kroll M, Hellums D, McIntire L, Schafer A, Moake J
(1996) Platelets and Shear Stress. The Journal of The
American Society of Hematology 88.5: 1525-541.
Ruggeri Z (1997) Von Willebrand Factor - Cell Ad-
hesion in Vascular Biology. The American Society for
Clinical Investigation 99: 559-564.
Sadler J (1998) Biochemistry and genetics of von Willebrand factor. Annual Review of Biochemistry 67: 395-424.
Yago T, Wu J, Wey C, Klopocki A, Zhu C, McEver R
(2004) Catch Bonds Govern Adhesion Through L-Se-
lectin At Threshold Shear. The Journal of Cell Biology
166: 913-924.

Moral Hazard and the Soft Budget Constraint: A game-theoretic look at the primal cause of the sub-prime mortgage crisis
Akshay Kotak
School of Economics and School of Industrial & Systems Engineering
Georgia Institute of Technology

This paper addresses one of the major causes of the sub-prime mortgage crisis prevalent
in large American mortgage houses by the end of 2006. The moral hazard scenario and
consequent malpractices are addressed with respect to the soft budget constraint. This
analysis is done by first looking at the Dewatripont and Maskin model (1995), and
then suitably modifying it to model the scenario at a typical mortgage lender. This sim-
plistic model provides useful insight into how heightened bailout expectations, caused
by precedent actions by the Federal Reserve, fueled risky behavior at banks who thought
themselves to be “too-large-to-fail.”

Advisor:
Emilson C. Silva
School of Economics
Georgia Institute of Technology
Introduction used to explain several phenomena and crises in the
Over the last two decades there has been considerable interest in the study of financial crises and instability, owing largely to the prevalence of financial crises in the recent past. As Alan Greenspan observed, after the collapse of the Soviet Bloc at the end of the Cold War, market capitalism spread rapidly through the developing world, largely displacing the discredited doctrine of central planning (Greenspan 2007). This abrupt transition led to explosive growth that was at times too hot to handle and inadequately controlled, causing several crises in the Third World, most notably in East Asia in 1997 and Russia in 1998. Additionally, there have been periods of economic tumult in the developed world, including the near collapse of Japan in the 1990s, the bailout of Long Term Capital Management by the Federal Reserve in 1998, and, most recently, the subprime mortgage crisis of 2007-08.

As Dimitrios Tsomocos highlights in his paper on financial instability, "[t]he difficulty in analyzing financial instability lies in the fact that most of the crises manifest themselves in a unique manner and almost always require different policies for their tackling" (Tsomocos 2003). Most explanations, however, are modeled on a game-theoretic framework involving a moral hazard scenario brought about by asymmetric information. This choice of framework has been popular because of its ability to predict equilibrium behavior (under reasonable assumptions) for a given scenario and to explain qualitatively and mathematically why and when deviations from this behavior occur.

This paper aims to perform a similar introductory analysis of one of the underlying causes of the current global economic crisis — subprime mortgage lending activity in the US from 2001-07 — in light of the soft budget constraint (SBC). The soft budget constraint syndrome, identified by János Kornai in his study of the economic behavior of centrally-planned economies (1986), has since been extended to the capitalist world. While initially used to explain shortage in socialist economies, the SBC has been usefully employed to explain the Mexican crisis of 1994, the collapse of the banking sector of East Asian economies in the 1990s, and the collapse of the Long Term Credit Bank of Japan.

The soft budget constraint syndrome is said to arise when a seemingly unprofitable enterprise is bailed out by the government or its creditors. This injection of capital in dire situations 'softens' the budget constraint for the enterprise – the amount of capital it has to work with is no longer a hard, fixed amount. There is a host of literature, primarily developed from a model designed by Mathias Dewatripont and Eric Maskin, which focuses on the moral hazard issues brought about when a government or central bank acts as the lender of last resort to financial institutions (Kornai et al. 2003).

Background

The subprime mortgage crisis of 2007 was marked by a sharp rise in United States home foreclosures at the end of 2006 and became a global financial crisis during 2007 and 2008. The crisis began with the bursting of the speculative bubble in the US housing market and high default rates on subprime adjustable rate mortgages made to higher-risk borrowers with lower income or lesser credit history than prime borrowers.

Several causes for the proliferation of this crisis to all sectors of the economy have been delineated, including excessive speculative investment in the US real estate market, the overly risky bets investment houses placed on mortgage-backed securities and credit swaps, inaccurate credit ratings and valuation of these securities, and the inability of the Securities and Exchange Commission to monitor and audit the level of debt and risk borne by large financial institutions. It would be fair to say, however, that one of the most fundamental causes of the entire debacle was the lending practices prevalent in mortgage houses in the US by the end of 2006 and the free hand given to these lenders to continue their practices. While securitization produced complex derivatives from these mortgages that were incorrectly valued and risk-appraised, it was ultimately the misguided decisions made by mortgage lenders that caused default rates to rise when the housing bubble burst, eroding the value of the underlying assets and setting off a chain reaction in the financial sector.
With housing prices on the rise since 2001, borrowers were encouraged to assume adjustable-rate mortgages (ARM) or hybrid mortgages, believing they would be able to refinance at more favorable terms later. However, once housing prices started to drop moderately in 2006-2007 in many parts of the U.S., refinancing became more difficult. Defaults and foreclosures increased dramatically as ARM interest rates reset higher. During 2007, nearly 1.3 million U.S. housing properties were subject to foreclosure activity, up 75% versus 2006 (US Foreclosure Activity 2007).

Primary mortgage lenders had passed a lot of the default risk of subprime loans to third-party investors through securitization, issuing mortgage-backed securities (MBS) and collateralized debt obligations (CDO). Therefore, as the housing market soured, the effects of higher defaults and foreclosures began to tell significantly on financial markets and especially on major banks and other financial institutions, both domestically and abroad. These banks and funds have reported losses of more than U.S. $500 billion as of August 2008 (Onaran 2008). This heavy setback to the financial sector ultimately led to a stock market decline. This double downturn in the housing and stock markets fuelled recession fears in the US, with spillover effects in other economies, and prompted the Federal Reserve to cut short-term interest rates significantly, from 5.25% in August '07 to 3.0% in February '08 and subsequently down all the way to 0.25% in December '08 (Historical Changes, 2008).

As the single largest mortgage financing institution in the US, Countrywide Financial felt the heat of the subprime crisis more than a lot of the other affected financial institutions. Faced with the double whammy of a housing market crash and the stiff credit crunch, the company found itself in a downward spiral, with a rise in readjusted mortgage rates increasing the number of foreclosures, which eroded profits.

In the case of Countrywide Financial and other large finance corporations who considered themselves "too-large-to-fail," the expectation of downside risk coverage was raised to a level that promoted substantial risk-taking. This expectation was based on precedent actions by the Federal Reserve in bailing out distressed large firms – dubbed the Greenspan (and now, the Bernanke) put. Thomas Walker (2008), in his article in The Wall Street Journal, aptly says,

There is tremendous irony, and common sense, in the realization that multiple successful rescues of the financial system by the Fed over several decades will eventually create a risk-taking culture that even the Fed will no longer be able to single-handedly save, at least not without serious inflationary consequences or help from foreigners to avoid a dollar collapse. Eventually the culture will overwhelm the ability of the authorities to make it all better.

Ethan Penner of Nomura Capital provides a succinct and veracious definition of the moral hazard dilemma in saying that, "Consequences not suffered from bad decisions lead to lessons not learned, which leads to bigger failings down the road" (Penner 2008).
The Dewatripont Maskin (DM) Model

Mathias Dewatripont and Eric Maskin developed a model in 1995 to explain the softening of the budget constraint under centralized and decentralized credit markets (Dewatripont et al. 1995). The simplest version of their model is a two-period model, with the key players being a banker that serves as the source of capital to each of a set of entrepreneurs that require funding to undertake projects. At the beginning of period 1, each of the entrepreneurs chooses a project to submit for financing, and projects may be defined as one of two types: good or poor. The probability of a project being good is $\alpha$. The asymmetry in information lies in the fact that, once selected, only the entrepreneur knows the type of the project, i.e. the banker is unable to monitor the project beforehand. The entrepreneur has no bargaining power, and the banker, if he decides to finance the project, makes a take-it-or-leave-it offer.

Set-up funding costs 1 unit of capital. The banker is able to learn the nature of a project once he funds set-up during period 1. A good project, once funded, yields a monetary return of $R_g$ (>0) and a private benefit $B_g$ (>0) for the entrepreneur by the beginning of period 2; gains can be made through private benefits, which may include intangibles such as reputation enhancement. A poor project, on the other hand, yields a monetary return of 0 by the beginning of period 2. If the banker ends up dealing with a poor project, he has, at the beginning of period 2, the option of liquidating the project's assets to obtain a liquidation value $L$ (>0), while the entrepreneur earns a private benefit $B_L$ (<0), since liquidation would imply a loss in reputation. The other option the banker has is to refinance the project, which would require the injection of another unit of capital at the beginning of period 2; the gross return is then $R_p$ and the private benefit to the entrepreneur is $B_p$ (>0). A graphical representation of the timing and structure of the DM model is shown in Figure 1.

Figure 1. The Structure of the Dewatripont Maskin Model (Source: Kornai et al., 2003)

The fairly simple model proposed by Dewatripont and Maskin, when suitably tweaked, may be used to explain a number of phenomena in both capitalist and socialist economies. The model was originally designed to assess how decentralizing the credit market (under some fairly reasonable assumptions about the comparative nature of $R_g$ and $R_p$) will harden the budget constraint — making markets more efficient — by adding incentive to entrepreneurs not to submit poor projects for financing.

For specific application to this study, the model will be used to study the moral hazard scenario that comes about when financial institutions consider themselves "too-large-to-fail." These institutions, Long Term Capital Management in the late 1990s and, more recently, Countrywide Financial, are insured to some measure in the sense that their multi-billion-dollar positions can affect financial markets so heavily that, in the case of a downturn, large private banks, the central bank, or the government would be forced to bail them out to avoid a financial meltdown. This insurance against downside risk stimulates the moral hazard scenario and gives incentive to these financial institutions to make much riskier bets with higher potential return.
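To make the refinancing mechanics concrete, the short sketch below walks through the banker's period-2 choice for a poor project under an illustrative decision rule: refinance when the net return from committing a second unit of capital, $R_p - 1$, beats the liquidation value $L$. That rule, the function names, and the numerical values are assumptions made for this sketch; they are not taken from the paper or from Dewatripont and Maskin's original treatment.

```python
# Illustrative sketch of the banker's period-2 choice in a DM-style model.
# Assumption: the banker refinances a poor project whenever the net payoff of
# refinancing (gross return Rp minus the extra unit of capital) beats the
# liquidation value L. The numbers below are made up for demonstration only.

def period2_decision(Rp: float, L: float) -> str:
    """Return the banker's ex post choice for a poor project."""
    refinance_payoff = Rp - 1.0   # gross return minus the second unit of capital
    liquidation_payoff = L
    return "refinance" if refinance_payoff > liquidation_payoff else "liquidate"

def expected_banker_payoff(alpha: float, Rg: float, Rp: float, L: float) -> float:
    """Expected period-2 payoff per funded project, net of the first unit of capital."""
    poor_payoff = max(Rp - 1.0, L)   # banker picks the better ex post option
    return alpha * Rg + (1 - alpha) * poor_payoff - 1.0

if __name__ == "__main__":
    # A 'soft' regime: refinancing a poor project looks better than liquidating it.
    print(period2_decision(Rp=1.4, L=0.3))   # -> refinance (budget constraint softens)
    # A 'hard' regime: liquidation dominates, so poor projects are terminated.
    print(period2_decision(Rp=1.1, L=0.5))   # -> liquidate
    print(round(expected_banker_payoff(alpha=0.6, Rg=1.8, Rp=1.4, L=0.3), 3))
```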
Methodology

The game-theoretic model used in this study has two key players – the borrowing entity ("borrower") and the lending entity ("the bank"). Additionally, the study looks at the effects of the presence of a lender of last resort. Borrowers, assumed to be identical, can choose from two types of loans offered by the bank – a fixed rate loan with principal $L_f$ and an adjustable rate loan with principal $L_a$. Customer utility $U(x)$, in the typical concave functional form – increasing with decreasing marginal returns (i.e. $U'>0$, $U''<0$) – is simplified in this model to be the natural logarithm function.

The fixed rate loan has an interest rate $r_f$. The adjustable rate loan is assumed to have an initial low fixed interest rate $r_a^0$ which is readjusted after a period $\lambda$. The remainder of the adjustable rate loan is paid off at the rate determined at the end of period $\lambda$. If market conditions are good at this time, the interest rate is adjusted to $r_g^1$, and if they are bad, the rate is adjusted to $r_b^1$. Market conditions are represented in the model by an exogenous variable $\theta$, which is the probability of the market conditions being good, i.e. of the interest rate being reset to $r_g^1$.

The bank and customer convene before a loan is offered to discuss the terms of the ARM. Based on the bank's expectations about the economy (i.e. $\theta$) and of the values of $r_b^1$ and $r_g^1$, the bank and the customer decide on a fixed initial rate $r_a^0$ and a period $\lambda$ for which the loan is kept fixed. The computation for $\lambda$ also involves a parameter, $\delta$, which reflects the increase in default rate for bad market conditions. This revenue shrinkage factor ($\delta$) can be thought of as an indicator of the bank's downside risk coverage. In the current framework, it is affected by two key factors:

1. Collateral requirements: Higher collateral would imply more downside risk coverage (i.e. higher $\delta$) but would also reduce the quantity of loans demanded, since fewer people would be able to pay the required collateral for the same loan. The bank would therefore weigh the benefit (potential revenue) of additional loans against the cost (increased risk) to choose the ideal collateral requirement for the ARM. This cost-benefit analysis is, however, outside the scope of this study, and $\delta$ is therefore assumed to be exogenous.

2. Bail-out expectations: Increased expectations of a bail-out (i.e. a cash injection in case of bad market conditions) would also raise the value of $\delta$, but without shrinkage in loan demand.

The game is played between borrowers and the bank, with equilibrium being reached by the bank setting $\lambda$ such that borrowers are indifferent to either of the two loans, and the borrowers opting for a mixed strategy. The indifferent borrower chooses a fixed rate loan with a probability $\alpha$ such that the expected payoff from either loan is the same for the bank.

This study analyzes the equilibrium of this game under two scenarios – with and without the presence of a lender of last resort. The presence of a lender of last resort who is expected to bail the lender out with a cash injection increases the (perceived) value of $\delta$, even though the level of protection offered to the bank through collateral remains the same. So, in this case, the revenue shrinkage for the second collection period is reduced (Figure 2).

Figure 2. Readjustment of adjustable rate loans after the initial period, λ.

The optimal loan amount for a fixed rate loan ($L_f^*$) maximizes the net utility for the borrower. Net utility is the difference between the utility gained from the loan amount and the total interest paid over the lifetime of the loan. The borrower therefore solves

$$\max_{L_f}\; \left\{ U(L_f) - r_f L_f \right\}$$

i.e.

$$\max_{L_f}\; \left\{ \ln(L_f) - r_f L_f \right\}$$

which yields,

(1)
$$L_f^* = \frac{1}{r_f}$$

With an adjustable rate loan, the interest payment for the average borrower would be

$$L_a \left( \lambda r_a^0 + (1-\lambda)\left( \theta r_g^1 + (1-\theta)\,\delta\, r_b^1 \right) \right)$$

Therefore, the optimal loan amount for an adjustable rate loan is

(2)
$$L_a^* = \frac{1}{\lambda r_a^0 + (1-\lambda)\left( \theta r_g^1 + (1-\theta)\,\delta\, r_b^1 \right)}$$

In order to ensure that a mixed strategy is employed at equilibrium, i.e. to have $0 < \alpha < 1$, the bank sets $\lambda$ such that borrowers are indifferent to fixed and adjustable rate loans:

$$U(L_f) - r_f L_f = U(L_a) - \left( \lambda r_a^0 + (1-\lambda)\left( \theta r_g^1 + (1-\theta)\,\delta\, r_b^1 \right) \right) L_a$$

Substituting values from (1) and (2), we obtain

$$-\ln r_f - 1 = -\ln\left( \lambda r_a^0 + (1-\lambda)\left( \theta r_g^1 + (1-\theta)\,\delta\, r_b^1 \right) \right) - 1$$

i.e.

$$r_f = \lambda r_a^0 + (1-\lambda)\left( \theta r_g^1 + (1-\theta)\,\delta\, r_b^1 \right)$$

giving us,

(3)
$$\lambda^* = \frac{r_f - \left( \theta r_g^1 + (1-\theta)\,\delta\, r_b^1 \right)}{r_a^0 - \left( \theta r_g^1 + (1-\theta)\,\delta\, r_b^1 \right)}$$
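As a quick numerical sanity check on equations (1) through (3), the sketch below evaluates $\lambda^*$ for one set of rates and verifies that, at that $\lambda^*$, the borrower's net utility from the two loans coincides and that $L_f^* = 1/r_f$ indeed maximizes $\ln(L) - r_f L$. The rates are the ones used later in the Figure 3 caption; $\theta$ and $\delta$ are chosen arbitrarily here, and the function names are ours, not part of the paper.

```python
import numpy as np

# Interest rates from the Figure 3 caption; theta and delta are chosen arbitrarily.
rf, ra0, rb1, rg1 = 0.062, 0.045, 0.10, 0.072
theta, delta = 0.6, 0.5

def blended_arm_rate(lam):
    """Effective ARM rate: lambda at the teaser rate, (1 - lambda) at the expected reset rate."""
    return lam * ra0 + (1 - lam) * (theta * rg1 + (1 - theta) * delta * rb1)

# Equation (3): lambda* that makes the blended ARM rate equal to rf.
reset = theta * rg1 + (1 - theta) * delta * rb1
lam_star = (rf - reset) / (ra0 - reset)
print(f"lambda* = {lam_star:.3f}")

# Borrower indifference check (equations (1), (2) and the substitution step).
Lf = 1 / rf                          # equation (1)
La = 1 / blended_arm_rate(lam_star)  # equation (2)
u_fixed = np.log(Lf) - rf * Lf
u_arm = np.log(La) - blended_arm_rate(lam_star) * La
print(f"net utility, fixed: {u_fixed:.4f}  ARM: {u_arm:.4f}")  # should match

# Crude grid check that L = 1/rf maximizes ln(L) - rf*L.
grid = np.linspace(0.1, 100, 100_000)
print(f"argmax on grid: {grid[np.argmax(np.log(grid) - rf * grid)]:.2f}  vs 1/rf = {1/rf:.2f}")
```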
Analysis

In deriving equation (3) above, we also find that, at equilibrium, the net interest rate charged for a fixed loan and an adjustable loan are the same, i.e.

$$r_f = \lambda r_a^0 + (1-\lambda)\left( \theta r_g^1 + (1-\theta)\,\delta\, r_b^1 \right)$$

Since all interest rates and parameters are positive, and since $r_a^0$ is assumed to be less than $r_f$, the above can only hold true if

(4)
$$r_a^0 < r_f < \theta r_g^1 + (1-\theta)\,\delta\, r_b^1$$

Also, it must hold that

(5)
$$r_a^0 < r_f < r_g^1 < r_b^1$$

This is derived from equation (4) and from the fact that, as market conditions worsen, liquidity becomes harder to obtain and therefore the cost of debt increases.

The equilibrium behavior that is of interest is the nature of the change in $\lambda$ with changes in the exogenous parameters $\theta$ and $\delta$. The rate of change of $\lambda$ with respect to $\theta$ is

(6)
$$\frac{\partial \lambda}{\partial \theta} = \frac{\left(r_f - r_a^0\right)\left(r_g^1 - \delta\, r_b^1\right)}{\left(r_a^0 - \delta\, r_b^1 - \theta\, r_g^1 + \delta\,\theta\, r_b^1\right)^2}$$

Given condition (4), the sign of the above expression depends on the sign of $\left(r_g^1 - \delta\, r_b^1\right)$. Therefore, if

$$\delta < \frac{r_g^1}{r_b^1}$$

then the right hand side of equation (6) is positive, implying that an increase in the probability of good market conditions causes an increase in the amount of time that the loan is kept at the low fixed rate $r_a^0$. This makes intuitive sense because if

$$\delta < \frac{r_g^1}{r_b^1}$$

then the bank is not adequately covered against downside risk, so even though the probability of good market conditions increases, the bank keeps the loan at the fixed low rate longer and decreases the length of the period of uncertain collection, which is subject to downside risk.

One concern that arises is why the bank takes any risk in the first place by offering an adjustable rate loan even though the payoff for this is the same as that for the less risky fixed rate loan. The reasoning here would be that adjustable rate loans earn higher commissions, which compensates to some level for this risk. Additionally, ARMs are preferred by more customers, and they therefore add intangible value in terms of higher volumes, which may lead to lower costs, better customer satisfaction and a broader clientele. Also, since the function for $\lambda$ is a rational function in $\theta$ (see equation 3), we can see that the values of $r_a^0$, $r_b^1$, and $r_g^1$ need to fall within a certain range to ensure that an ARM is feasible, i.e. that $\lambda$ lies between 0 and 1.

Conversely, if

$$\delta > \frac{r_g^1}{r_b^1}$$
then the bank is covered against downside risk. The right hand side of equation (6) is now negative, so an increase in the probability of good market conditions extends the length of the period of uncertain collection. Additionally, if

$$\delta = \frac{r_g^1}{r_b^1}$$

then the bank is independent of the nature of market conditions, i.e. independent of $\theta$. However, since $\delta$ is not set arbitrarily by the bank, it cannot always pursue this strategy of hedging against market risk (Figure 3).

As mentioned earlier, the presence of a lender of last resort inflates the value of $\delta$ without any collateral increase. From Figure 3 we see that as $\delta$ increases, there are three ways in which the bank begins to take on more risk. First, a rise in the value of $\delta$ increases the feasibility of adjustable rate loans – loans that were not feasible for a given economic outlook (i.e. $\theta$ value) now start becoming feasible even though the bank is not adequately covered against this higher level of risk by its collateral collection. Additionally, a rise in $\delta$ decreases the sensitivity of $\lambda$ with respect to $\theta$. A higher $\delta$ therefore prompts less vigilant observation of market conditions, as small changes in market outlook now mandate less significant changes in loan structure. Finally, if $\delta$ is raised to a high enough value,

$$\delta > \frac{r_g^1}{r_b^1}$$

then the bank begins to make counter-intuitive decisions, and a decrease in the probability of good market conditions now actually brings about an increase in the period of uncertain collection.

Figure 3. Period of fixed rate collection, $\lambda$, vs. probability of good conditions, $\theta$; $r_f$ = 6.2%, $r_a^0$ = 4.5%, $r_b^1$ = 10%, and $r_g^1$ = 7.2%; $\delta$ = 0 to 1, in increments of 0.1.
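The sweep below is a minimal sketch of how Figure 3 can be regenerated from equation (3): it evaluates $\lambda^*(\theta)$ for each $\delta$ in the caption's grid, marks the infeasible cases ($\lambda^*$ outside [0, 1]), and reports the sign of $\partial\lambda/\partial\theta$ so the flip at $\delta = r_g^1/r_b^1 = 0.72$ is visible. It uses only the parameter values stated in the Figure 3 caption; the plotting itself is left out.

```python
import numpy as np

# Parameter values from the Figure 3 caption.
rf, ra0, rb1, rg1 = 0.062, 0.045, 0.10, 0.072

def lam_star(theta, delta):
    """Equation (3): period of fixed-rate collection that makes borrowers indifferent."""
    reset = theta * rg1 + (1 - theta) * delta * rb1
    return (rf - reset) / (ra0 - reset)

thetas = np.linspace(0.0, 1.0, 101)
for delta in np.arange(0.0, 1.01, 0.1):           # delta = 0 to 1 in increments of 0.1
    lam = lam_star(thetas, delta)
    feasible = (lam >= 0.0) & (lam <= 1.0)         # ARM only feasible if 0 <= lambda <= 1
    slope_sign = np.sign(rg1 - delta * rb1)        # sign of d(lambda)/d(theta), equation (6)
    print(f"delta={delta:.1f}  feasible theta points: {feasible.sum():3d}/101, "
          f"sign of dlambda/dtheta: {slope_sign:+.0f}")
```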
Conclusion

This model, albeit simplistic, provides interesting results. The model is designed to mimic the basic setup at a typical mortgage house offering a fixed rate loan and a two-step adjustable rate loan. We are able to show, mathematically, that an increase in the expectation of a bailout by a lender of last resort tends to encourage risky behavior in such mortgage-offering agencies in multiple ways. That being said, there is plenty of scope for further elaboration and sophistication of the model. The market structure currently under investigation is both simplistic and insular, but a more elaborate structure of markets and corresponding interactions could be designed. For instance, a good example of possible market stratification is illustrated in Tsomocos (2003). Also, the current loan structure is a two-period model with loans changing rates at the end of period one to a new fixed rate for period two. A more complex, multi-period loan structure could be investigated, with the adjustable rate set as a random variable and a Markov chain approach used to study the equilibrium behavior in this scenario.

In their investigation of "federalism," Qian and Roland (1998) observe that giving fiscal authority to local governments instead of the central government works to limit the effects of the soft budget syndrome. They propose a three-tiered structure with local governments working between the central government and state and non-state enterprises. The competition among local governments to attract enterprises forces funds to be diverted into infrastructure development, increasing the opportunity cost of a bailout and thereby hardening the budget constraint for enterprises. A similar scenario could be envisioned in which the Federal Reserve distributes the decision-making authority (and funds) to bail out corporations among the twelve regional Federal Reserve Banks; it would be of interest to study the subsequent change in the behavior of the lending banks.
References

Demyanyk, Y. & Van Hemert, O. (2008). Understanding the Subprime Mortgage Crisis. Retrieved December 9, 2008, <http://ssrn.com/abstract=1020396>

Dewatripont, M. & Maskin, E. (1995). Credit and Efficiency in Centralized and Decentralized Economies. The Review of Economic Studies, 62(4), 541-555.

Greenspan, A. (2007, December 12). The Roots of the Mortgage Crisis. The Wall Street Journal, p. A19.

Historical Changes of the Target Federal Funds and Discount Rates: 1971 - present. (2008). Retrieved December 12, 2008, <http://www.newyorkfed.org/markets/statistics/dlyrates/fedrate.html>

Kornai, J. (1979). Resource-Constrained versus Demand-Constrained Systems. Econometrica, 47(4), 801-819.

Kornai, J. (1980). Economics of Shortage. Amsterdam: North-Holland.

Kornai, J. (1986). The Soft Budget Constraint. Kyklos, 39(1), 3-30.

Kornai, J., Maskin, E. & Roland, G. (2003). Understanding the Soft Budget Constraint. Journal of Economic Literature, 41(4), 1095-1136.

Onaran, Y. (2008, August 12). Banks' Subprime Losses Top $500 Billion on Writedowns. Retrieved December 12, 2008, <http://www.bloomberg.com/apps/news?pid=20670001&sid=a8sW0n1Cs1tY>

Penner, E. (2008, April 11). Our Financial Bailout Culture. The Wall Street Journal, p. A17.

Qian, Y. and Roland, G. (1998). Federalism and the Soft Budget Constraint. American Economic Review, 88(5), 1143-62.

Tsomocos, D. (2003). Equilibrium analysis, banking and financial instability. Journal of Mathematical Economics, 39, 619-655.

U.S. Foreclosure Activity up 75 Percent in 2007. (2007). Retrieved March 14, 2008, <http://www.realtytrac.com/ContentManagement/RealtyTracLibrary.aspx?a=b&ItemID=4118&accnt=64953>

Walker, T. (2008, April 23). Our Bailout Culture Creates a Huge Moral Hazard. The Wall Street Journal, p. A16.


Compact Car Regenerative Drive Systems:
Electrical or Hydraulic
QUINN LAI
School of Mechanical Engineering
Georgia Institute of Technology

The objective of this research is to address the power density issue of electric hybrids and the energy density issue of hydraulic hybrids by designing a drive system. The drive system utilizes new enabling technologies such as the INNAS Floating Cup pumps/pump motors and the Toshiba Super Charge Ion Batteries (SCiB). The proposed architecture initially included a hydraulic-electric system in which the high braking power is absorbed by the hydraulic system while energy is slowly transferred from both the Internal Combustion Engine (ICE) drive train and the hydraulic drive train to the electric accumulator for storage. Simulations were performed to demonstrate the control method for the hydraulic system with in-hub pump motors. Upon preliminary analysis it was concluded that the electric system alone is sufficient. The final design is an electric system that consists of four in-hub motors. Analysis was performed on the system and MATLAB Simulink was used to simulate the full system. It is concluded that the electric system has no need for a frictional braking system if the Toshiba SCiBs are used. The regenerative braking system is able to provide an energy saving of 25% to 30% under the simulated conditions.

Advisor:
Wayne J. Book
School of Mechanical Engineering
Georgia Institute of Technology
INTRODUCTION

With around 247 million on-road vehicles traveling around 3 trillion miles every year (Highway Statistics, 2009), the efficiency of on-road vehicles is of major concern. As a result, hybrid drive trains, which dramatically increase the urban driving efficiency of vehicles, have been developed and implemented in vehicles. Existing on-the-road hybrids have their secondary regenerative systems (electric motors and batteries) installed on their primary drive trains (ICE drive train) to provide the regenerative braking capability. Recently, effort has been put into designing drive train systems that have either hydraulic or electric components as integral parts of the system. For example, in the Chevy Volt, a series electric hybrid system, the ICE is used to charge the electric accumulator, which in turn drives the electric motor (Introducing Chevrolet, 2009).

Electric hybrid drive trains have been implemented in passenger vehicles while hydraulic hybrids have been implemented in commercial vehicles. Since electric hybrid systems can operate quietly, enhancing passenger comfort, this system is implemented in passenger vehicles. However, current battery technologies on the market prevent high power charging and thus prevent the electric system from replacing frictional brakes. As a result, a significant amount of braking energy is lost to the surroundings through heat. Hydraulic hybrids, in contrast, have the ability to capture most of the braking power. However, due to the characteristics of hydraulic components, hydraulic systems suffer from the accumulator's low energy density; the Noise, Vibration and Harshness (NVH) also significantly affect the driving experience.

In an attempt to address the charging power density challenge faced by electric hybrids and the energy density challenge faced by hydraulic hybrids, different drive systems were designed. Braking Power Analysis presents a simple braking power analysis as the foundation of further calculations in other parts of the report. The initial approach to solve the problem was to incorporate an electrical system in an existing hydraulic hybrid system. Hydraulic Hybrid Drive System presents the engineering-level analysis of the hydraulic hybrid system, and Hydraulic Accumulator Analysis investigates the hydraulic accumulator. It was confirmed that the hydraulic accumulator does not have a sufficient energy density for braking energy capture, and therefore electrical accumulators were introduced to capture the excess energy. Battery Analysis investigates the electrical accumulators. Upon the completion of this analysis, it was concluded that the electrical system alone is sufficient. As a result, an in-hub motor driven electric drive system (Figure 7) was chosen, and the corresponding analysis and simulations are presented in Electrical Systems.

Braking Power Analysis

Braking power analysis is performed to serve as a foundation for the accumulator analyses in Hydraulic Accumulator Analysis and Battery Analysis. The analysis is conducted with an assumption of negligible rolling friction, air drag and other losses. The driving analysis is performed on a mid-size passenger sedan, such as a Honda FCX Clarity. The Honda FCX Clarity fuel cell car was selected because the weight of the components in the car closely resembles the weight of the suggested drive system. The assumed vehicle mass is 1625 kg (Honda at the Geneva, 2009). The ECE-15 Driving Schedule is shown in Figure 1.

A 6 second 35 mph to 0 mph deceleration is assumed. The assumed braking slope resembles rapid urban braking and is more rapid than the ECE-15 Urban Driving Schedule braking. A rapid 60 mph to 0 mph deceleration is also assumed. Under normal driving conditions, a passenger vehicle will take about 200 ft to decelerate (2009 Driver's Manual, 2009). The deceleration time involved can be obtained using Equation (1) and Equation (2).

Figure 1. ECE-15 Driving Schedule; x-axis: time (s); y-axis: vehicle velocity (mph).

(1)
$$v_f^2 = v_i^2 + 2as$$

where $v_f$ is the final velocity of the vehicle, $v_i$ is the initial velocity of the vehicle, $a$ is the acceleration of the vehicle, and $s$ is the displacement involved in the acceleration. The time is obtained using Equation (2),

(2)
$$v_f = v_i + at$$

where $t$ is the deceleration time. Using the provided equations, we obtained 11 seconds as the deceleration time. The energy dissipated can be obtained using Equation (3),

(3)
$$KE = \frac{1}{2} m v^2$$

where $KE$ is the kinetic energy of the vehicle, $m$ is the mass of the vehicle, and $v$ is the velocity of the vehicle. The braking power can then be determined using Equation (4),

(4)
$$P = \frac{d(KE)}{dt} = m v \frac{dv}{dt} = m v a$$

where $P$ is the power involved with the braking. The deceleration details calculated are tabulated in Table 1.

vi (mph)   vf (mph)   t (s)   Braking Power (kW)
35         0          6       66.0
60         0          11      99.8

Table 1. Deceleration details for the assumed vehicle.
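A small worked example of Equations (1) through (4) is sketched below: it converts the two assumed stops to SI units, derives the deceleration from the assumed stopping time, and evaluates the peak braking power $mva$ at the start of braking. The function name is ours. The 35 mph case lands very close to the tabulated 66.0 kW, while the 60 mph case comes out several percent above the tabulated 99.8 kW, which likely reflects rounding in the original calculation.

```python
MPH_TO_MS = 0.44704    # 1 mph in m/s
VEHICLE_MASS = 1625.0  # kg, assumed Honda FCX Clarity-like sedan

def braking_case(v0_mph: float, t_stop: float, m: float = VEHICLE_MASS):
    """Deceleration, dissipated kinetic energy and peak braking power for a stop to rest."""
    v0 = v0_mph * MPH_TO_MS          # initial speed, m/s
    a = v0 / t_stop                  # from Eq. (2) with v_f = 0
    ke = 0.5 * m * v0**2             # Eq. (3), J
    p_peak = m * v0 * a              # Eq. (4) evaluated at the start of braking, W
    return a, ke, p_peak

for v_mph, t in [(35, 6.0), (60, 11.0)]:
    a, ke, p = braking_case(v_mph, t)
    print(f"{v_mph} mph stop in {t:.0f} s: a = {a:.2f} m/s^2, "
          f"KE = {ke/1e3:.1f} kJ, peak braking power = {p/1e3:.1f} kW")
```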

Hydraulic Hybrid Drive System

The initial approach to solve the problem was to incorporate an electrical system in an existing hydraulic hybrid system. The selected hydraulic system is the INNAS HyDrid. The architecture of the HyDrid system (Achten, 2007) is presented in Figure 2.

Figure 2. HyDrid Drive system adapted from Achten.

INNAS claimed that 77 Miles Per Gallon (MPG) is possible for the HyDrid system (HyDrid, 2009) because the secondary power plant allows engine-off operation, and the Infinitely Variable Transmission (IVT) allows the engine to rotate at the optimum RPM for efficiency. The INNAS HyDrid utilizes the INNAS Hydraulic Transformers (IHT) (Achten, 2002) in a Common Pressure Rail (CPR) (Vael & Achten, 2000). The IHT is claimed to have unmatched efficiency due to the Floating Cup Principle that it utilizes. The starting torque efficiency, according to Achten, is up to 90% (Achten, 2002) or above. The control method of the HyDrid is not published; therefore, a possible control method is presented to demonstrate how the IHT functions as an IVT, converting the varying pressure from the accumulator into the desired pressure for the in-wheel constant displacement motor/pumps. When accelerating, either the pressure accumulator or the ICE provides the required pressure in the CPR, which is in turn transmitted by the IHT to drive the in-wheel constant displacement motor/pump. The IHT is assumed to be a variable pump coupled with a variable pump/motor. A possible method of controlling the acceleration is to vary the stroke of the variable pump in the IHT while keeping the pump motor stroke and the ICE RPM constant. During braking, the pump (CPR side) stroke is kept constant while the pump motor stroke is varied to charge up the accumulator. The suggested control method is presented in Figure 3.

A simulation is performed to demonstrate the control method. The system assumes that the vehicle has a 4-cylinder gasoline engine, a volumetric efficiency of 0.85 for the variable pump/motor, and a volumetric efficiency of 0.92 for the constant displacement pumps and inactive pressure accumulators. It is also assumed that the pipe lines are ideal and that no force is involved in varying the pump stroke. The simulation shows how, by varying the IHT pump stroke, the vehicle speed can closely follow a desired trajectory with minimal ICE rpm variation. The ICE rpm and the pump stroke variation are shown in Figures 4 and 5, respectively.

Figure 3. Suggested Control Method for the HyDrid system.

The resulting vehicle velocity is shown in Figure 6. The simulated vehicle velocity closely matches the desired velocity trajectory, which is the ECE-15 driving schedule (Figure 1). The simulated velocity trajectory is idealized because of the idealized assumptions made in creating the simulation model. The pressure values provided by the simulation are also observed to be faulty, so this simulation's values cannot be used for quantitative purposes. However, it is sufficient for demonstrating the relationship between the stroke of the pump in the IHT and the vehicle velocity.

Hydraulic Accumulator Analysis

The hydraulic accumulator has sufficient power density but a low energy density. An attempt was made to quantify the energy storage capacity of a typical-size hydraulic accumulator for a hydraulic hybrid vehicle so that the proposed additional battery pack can be correctly sized. A 38 L EATON hydraulic accumulator (Product Literature, 2009) is assumed (used in CCEFP Test Bed 3: Highway Vehicles). The parameters used for the energy calculations are tabulated in Table 2.

Volume (m³)                                   0.038
Precharge Pressure (MPa)                      10.7
Precharge Nitrogen Volume (m³)                0.038
Maximum Nitrogen Pressure (MPa)               20.6
Nitrogen Volume at Maximum Pressure (m³)      0.0176

Table 2. EATON 38 L hydraulic accumulator.

Figure 4. Engine RPM for HyDrid simulation; x-axis: time (s); y-axis: ICE rpm.

Figure 5. Pump stroke variation for HyDrid simulation; x-axis: time (s); y-axis: stroke (m).
The assumed relationship between pressure and volume is shown in Equation (5),

(5)
$$pV^n = \text{constant}$$

where $p$ is the pressure of the nitrogen in the accumulator, $V$ is the volume of nitrogen in the accumulator, and $n$ is an empirical constant. Using this relationship, the total energy involved in completely pressurizing or depressurizing the accumulator is given by Equation (6),

(6)
$$W = \frac{p_f V_f - p_i V_i}{1 - n}$$

where $p_i$ is the initial pressure, $p_f$ is the final pressure, $V_i$ is the initial volume, $V_f$ is the final volume, and $W$ is the energy involved. Using Equation (5) and Equation (6) we can calculate the total energy storage of the EATON 38 L pressure accumulator, which is 293.6 kJ. Using Equation (3) and assuming a vehicle with a weight of 1625 kg (from Braking Power Analysis), a 38 L accumulator is sufficient for acceleration from 0 mph to 42.5 mph. It is assumed that no energy is lost due to friction, drag, and inertia changes. As the main purpose of the hydraulic system in a hydraulic hybrid is to capture urban braking energy and to accelerate the car to a velocity at which the ICE can be started, 293.6 kJ is sufficient. However, if the vehicle is braking from a speed higher than 42.5 mph, or if the duration of braking is long, the hydraulic system will not be able to capture all of the braking energy. Therefore an electrical system is introduced to capture the excess energy.

Figure 6. Vehicle velocity variation for HyDrid simulation; x-axis: time (s); y-axis: velocity (mph).
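The sketch below reruns this accumulator calculation from the Table 2 values: it backs the polytropic exponent $n$ out of the precharge and maximum-pressure states via Equation (5), evaluates Equation (6), and converts the result into the vehicle speed whose kinetic energy it can supply. The exponent is not stated in the paper, so deriving it from the two tabulated states is our assumption; the result comes out within a few kJ of the reported 293.6 kJ.

```python
import math

# Accumulator states from Table 2 and vehicle mass from Braking Power Analysis.
p_i, V_i = 10.7e6, 0.038     # precharge pressure (Pa) and nitrogen volume (m^3)
p_f, V_f = 20.6e6, 0.0176    # maximum pressure (Pa) and nitrogen volume (m^3)
m_vehicle = 1625.0           # kg

# Equation (5): p * V**n = const  =>  n = ln(p_f/p_i) / ln(V_i/V_f)
n = math.log(p_f / p_i) / math.log(V_i / V_f)

# Equation (6): energy stored between the two states.
W = (p_f * V_f - p_i * V_i) / (1.0 - n)

# Speed whose kinetic energy (Eq. 3) equals the stored energy, ignoring losses.
v = math.sqrt(2.0 * abs(W) / m_vehicle)          # m/s
print(f"n = {n:.3f}, stored energy = {abs(W)/1e3:.1f} kJ, "
      f"equivalent to 0 to {v / 0.44704:.1f} mph")
```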
Battery Analysis

The batteries are proposed to serve as secondary energy storage which will capture excess energy that cannot be captured by the hydraulic system. The two electric accumulators analyzed are the Sony Olivine-type Lithium Iron Phosphate (LFP) cells (Sony Launches High-Power, 2009) and the Toshiba SCiB cells (Toshiba to Build, 2009). Both cells exhibit impressive recharge cycles and high charging power density. Cell specifications are shown in Table 3.

                        SCiB Cell               LFP
Nominal Voltage (V)     2.4                     3.2
Nominal Capacity (Ah)   4.2                     1.1
Size (mm)               approx. 62 x 95 x 13    d = 18, h = 65
Weight (g)              approx. 150             40
Charging time           90% in 5 min            99% in 30 min

Table 3. Toshiba SCiB and Sony LFP cell specifications.

The charging power density can be obtained using Equation (7),

(7)
$$\text{Charging power density} = \frac{(\%\,\text{Charge}) \cdot (\text{Energy density})}{t_{\text{charging}}}$$

where $t_{\text{charging}}$ is the charging time for one cell. Energy density can be described using Equation (8),

(8)
$$\text{Energy density} = \frac{C \cdot V \cdot 3600\ \mathrm{s/h}}{m_{\text{cell}}}$$

where $C$ is the cell capacity in Ah, $V$ is the nominal voltage, and $m_{\text{cell}}$ is the mass of one cell. The energy density and charging power density values obtained are tabulated in Table 4.

                                 SCiB Cell   LFP
Energy Density (kJ/kg)           242         316.8
Charging Power Density (W/kg)    726         198

Table 4. Toshiba SCiB and Sony LFP cell energy density and charging power density.

As shown in Table 4, the Sony LFP outperforms the Toshiba SCiB in terms of energy density by a factor of 1.3, while the SCiB outperforms the LFP in terms of power density by a factor of 3. As the major limitation in electric hybrid systems is the charging power density of the batteries, the SCiB cell is used for further analysis.

As calculated in Braking Power Analysis, 66.0 kW is the maximum braking power that occurs in a 6 second 35 mph to 0 mph city braking. Assuming that all the SCiB cells used are arranged in such a way that the power is shared equally among the cells, and assuming ideal electronic components, 90.8 kg of SCiB cells are required. 90.8 kg of SCiB has a total capacity of 22,000 kJ. According to the Braking Power Analysis calculations, each city braking involves 197.7 kJ of braking energy, which is 0.9% of the battery pack's capacity. According to Toshiba, the capacity loss after 3,000 cycles of rapid charge and discharge is less than 10% (Toshiba to Build, 2009). Using the assumptions in Braking Power Analysis and assuming that the 10% capacity loss after 3,000 cycles is negligible, we obtain 334 thousand cycles as an approximate number of regenerative braking and driving cycles allowed before the capacity of the SCiB drops below 90%. Using the same assumptions, we can also find that 137.6 kg of SCiB is sufficient for the maximum charging power involved in the 11 second 60 mph to 0 mph highway accident braking. The weight of the battery pack required is slightly heavier than the 70 kg battery pack in a Toyota Prius electric hybrid vehicle.
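The following sketch reproduces the cell-level numbers in Table 4 from the Table 3 specifications via Equations (7) and (8), and then repeats the pack-sizing arithmetic: the cell mass needed to absorb the 66.0 kW urban braking peak, the resulting pack capacity, and the share of that capacity used per 197.7 kJ urban stop. Variable names are ours; the LFP charging power density comes out somewhat below the tabulated 198 W/kg, which we attribute to rounding in the original table.

```python
def energy_density(cap_ah: float, volt: float, mass_kg: float) -> float:
    """Equation (8): cell energy density in J/kg."""
    return cap_ah * volt * 3600.0 / mass_kg

def charging_power_density(frac_charged: float, e_density: float, t_charge_s: float) -> float:
    """Equation (7): charging power density in W/kg."""
    return frac_charged * e_density / t_charge_s

# Table 3 specifications.
scib = dict(cap_ah=4.2, volt=2.4, mass_kg=0.150, frac=0.90, t_s=5 * 60)
lfp  = dict(cap_ah=1.1, volt=3.2, mass_kg=0.040, frac=0.99, t_s=30 * 60)

for name, cell in [("SCiB", scib), ("LFP", lfp)]:
    ed = energy_density(cell["cap_ah"], cell["volt"], cell["mass_kg"])
    cpd = charging_power_density(cell["frac"], ed, cell["t_s"])
    print(f"{name}: energy density = {ed/1e3:.1f} kJ/kg, charging power density = {cpd:.0f} W/kg")

# Pack sizing against the 66.0 kW urban braking peak from Braking Power Analysis.
scib_ed = energy_density(scib["cap_ah"], scib["volt"], scib["mass_kg"])
scib_cpd = charging_power_density(scib["frac"], scib_ed, scib["t_s"])
pack_mass = 66.0e3 / scib_cpd          # kg of cells needed to absorb the peak
pack_energy = pack_mass * scib_ed      # J
print(f"pack mass = {pack_mass:.1f} kg, capacity = {pack_energy/1e3:.0f} kJ, "
      f"urban stop uses {197.7e3 / pack_energy * 100:.1f}% of capacity")
```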
Electrical Systems

As shown in the Battery Analysis calculations, the Toshiba SCiBs have a charging power density that is more than sufficient for regenerative braking. As a result, neither the hydraulic system nor the frictional braking system is necessary in an electric vehicle equipped with the Toshiba SCiBs. A mechanical emergency brake should nonetheless be installed to prevent accidents in case of regenerative braking system failure.

The simplest possible design is a plug-in electric or fuel cell vehicle that has 4 in-hub motors. The simplified system is shown in Figure 7. As shown in Figure 7, the 4 in-hub wheel motors are directly connected to the wheels. With mechanical components such as the ICE, differentials, and the transmission removed, the vehicle weight can be reduced, and the efficiency of the whole drive train can be increased by at least a factor of 3 (Clean Urban Transport, 2009). Some efficiency values (Achten, 2009; Valøen & Shoesmith, 2007) are provided in Table 5 for comparison. The 4 in-hub motor design also allows the vehicle to enjoy a very small turning radius and other advantages of 4WD vehicles, such as increased traction performance and precision handling.

Figure 7. Selected Electric Car architecture.

Component                  Approx. Efficiency
ICE                        20%
Transmission (automatic)   85%
Transmission (manual)      92% to 97%
Differential               90%
Motor                      90%
Battery recharge           80% to 90%

Table 5. Efficiency values of integral components of the ICE drive train and the electric drive train.

To validate the design, a simulation is done for the suggested system. Because of the mechanical components removed, a lighter car is selected for the simulation. The selected vehicle is a Honda Civic, with a vehicle mass of 1246 kg (Complete Specifications, 2009) and a CdA value of 0.682 m². The air drag of the vehicle can be calculated using Equation (9) (Larminie & Lowry, 2003),

(9)
$$F_{ad} = \frac{1}{2} \rho\, C_d A v^2$$

where $F_{ad}$ is the drag force, $\rho$ is the air density, $C_d$ is the drag coefficient, $A$ is the cross-sectional area of the vehicle facing the front, and $v$ is the velocity of the vehicle. The rolling friction of the vehicle can be obtained using Equation (10) (Larminie & Lowry, 2003),

(10)
$$F_{rr} = \mu_{rr}\, m\, g$$
where $F_{rr}$ is the rolling friction force, $\mu_{rr}$ is the rolling friction coefficient, which is assumed to be 0.015 (for a radial-ply tire) (Larminie & Lowry, 2003), $m$ is the mass of the vehicle, and $g$ is the acceleration due to gravity. As in-hub motor specifications are not readily available, the 4 in-hub motors are assumed to resemble the Tesla Roadster drive train system (2010 Tesla Roadster, 2009), which involves only a motor and a fixed gear set with a gear train ratio of 8.28. The 375 V AC motor has a 215 kW peak power and 400 Nm of torque. All the parameters, including Equation (9) and Equation (10), are included in the simulation.
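A short sketch of the road-load terms in Equations (9) and (10) for the assumed Honda Civic parameters is given below; it prints the drag and rolling-resistance forces and the corresponding road-load power at a few speeds. The air density of 1.225 kg/m³ is a standard sea-level value assumed here, not a figure quoted in the paper.

```python
RHO_AIR = 1.225      # kg/m^3, assumed standard sea-level air density
CDA = 0.682          # m^2, drag coefficient times frontal area for the Honda Civic
MASS = 1246.0        # kg
MU_RR = 0.015        # rolling resistance coefficient, radial-ply tire
G = 9.81             # m/s^2

def road_load(v_ms: float):
    """Drag force (Eq. 9), rolling resistance (Eq. 10) and total road-load power."""
    f_drag = 0.5 * RHO_AIR * CDA * v_ms**2
    f_roll = MU_RR * MASS * G
    power = (f_drag + f_roll) * v_ms
    return f_drag, f_roll, power

for mph in (20, 35, 60):
    v = mph * 0.44704
    f_drag, f_roll, p = road_load(v)
    print(f"{mph:>2} mph: drag = {f_drag:6.1f} N, rolling = {f_roll:5.1f} N, "
          f"road-load power = {p/1e3:5.2f} kW")
```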
The model simulates the vehicle attempting to follow the ECE Driving Schedule shown in Figure 1. The controller assumed is a PID controller with a Kp of 10.4, a Ki of 0.546, and a Kd of -0.386 (MATLAB Simulink tuned for a 2.06 second response time). The resultant velocity trajectory and the power variation are shown in Figure 8 and Figure 9, respectively.

As expected, the power stays below a value of 66 kW, and there are negative values in the time-versus-power plot, as deceleration is involved in the ECE-15 Driving Schedule. Because of the losses, the negative peaks that correspond to braking power have a magnitude that is relatively smaller than the positive peaks that correspond to accelerating power. From the simulation, the highest braking power observed is a little over 10 kW, which can easily be fully captured using the 70 kg of SCiB battery packs (refer to Battery Analysis for the calculations). Simulink scopes are added to the system to observe the energy change with and without the regenerative braking system. The results are shown in Figure 10 and Figure 11.

Comparing Figures 10 and 11, we can observe a 25% energy saving for the system with regenerative braking. Tuning the PID controller can increase the energy savings value up to 30%. Using a better controller has the potential to increase the energy savings further.

Figure 8. Vehicle Velocity Variation for Electric Drive System; x-axis: time (s); y-axis: velocity (mph).

Figure 9. Power plot for Electric Drive System Simulation; x-axis: time (s); y-axis: power (W).

Figure 10. Energy required to complete the ECE-15 driving cycle without regenerative braking; x-axis: time (s); y-axis: energy (J).
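To illustrate the kind of energy accounting behind Figures 10 and 11 without the Simulink model, the sketch below runs a point-mass Honda Civic through a crude synthetic accelerate-cruise-brake cycle with a PID speed controller and compares the traction energy drawn with and without crediting regenerated braking energy (using the 80% battery recharge efficiency from Table 5). The cycle, time step, and controller gains are stand-ins chosen for this sketch rather than the ECE-15 schedule or the tuned Simulink gains, so the printed percentage is only a rough analogue of the 25-30% savings reported above.

```python
import numpy as np

MASS, CDA, MU_RR, RHO, G = 1246.0, 0.682, 0.015, 1.225, 9.81
RECHARGE_EFF = 0.80              # battery recharge efficiency from Table 5
DT = 0.1                         # s, simulation time step (assumed)
KP, KI, KD = 2000.0, 10.0, 0.0   # sketch PID gains on speed error (not the Simulink gains)

def target_speed(t):
    """Synthetic urban snippet: accelerate to ~30 mph, cruise, brake to rest."""
    v_cruise = 30 * 0.44704
    if t < 15:  return v_cruise * t / 15
    if t < 40:  return v_cruise
    if t < 50:  return v_cruise * (50 - t) / 10
    return 0.0

def simulate(regen: bool) -> float:
    """Return net traction energy (J) over the cycle for the PID-driven point mass."""
    v, err_int, err_prev, energy = 0.0, 0.0, 0.0, 0.0
    for t in np.arange(0.0, 60.0, DT):
        err = target_speed(t) - v
        err_int += err * DT
        force = KP * err + KI * err_int + KD * (err - err_prev) / DT  # commanded wheel force
        err_prev = err
        road_load = 0.5 * RHO * CDA * v**2 + MU_RR * MASS * G
        v = max(0.0, v + (force - road_load) / MASS * DT)
        wheel_power = force * v
        if wheel_power >= 0:
            energy += wheel_power * DT                     # energy drawn for traction
        elif regen:
            energy += wheel_power * DT * RECHARGE_EFF      # partial credit for braking energy
    return energy

e_no_regen = simulate(regen=False)
e_regen = simulate(regen=True)
print(f"without regen: {e_no_regen/1e3:.0f} kJ, with regen: {e_regen/1e3:.0f} kJ, "
      f"saving: {(1 - e_regen/e_no_regen)*100:.0f}%")
```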
Recommended Future Work

The hydraulic drive system may deserve a more in-depth analysis. If the accumulator energy density can be improved, the hydraulic drive system may be a more environmentally friendly option than its electrical counterparts, as the production and disposal of batteries have detrimental effects on the environment. It is recommended that the simulation model be improved to better model the actual response of the HyDrid system. For the electrical system, effort should be invested in research on in-hub motors, which produce significantly less torque than a regular AC motor coupled with an 8.28 to 1 gear ratio gear train. Parameters within the Simulink model should be selected to better represent in-hub motors, and the batteries should be modeled in greater detail, as different arrangements of the cells will result in different power densities. Losses involved with the electrical components should also be investigated. One challenge that the electric drive system should overcome is the energy density issue of the batteries, since the energy density of a battery is significantly lower than that of gasoline. Efforts should also be invested in technologies related to battery recycling.

Figure 11. Energy required to complete the ECE driving cycle with regenerative braking; x-axis: time (s); y-axis: energy (J).

Conclusion

An attempt was made to design a compact car drive system to address the charging power density challenge faced by electric hybrid vehicles and the energy density challenge faced by hydraulic hybrid vehicles. The initial approach to solve the problem was to incorporate an electrical system in an existing hydraulic hybrid system. The INNAS HyDrid was used as the foundation architecture for analysis. Simulations were performed to understand the control dynamics of the HyDrid. After performing quantitative analysis on hydraulic accumulators, it was confirmed that hydraulic accumulators cannot provide a sufficient energy density for braking
energy storage. In one of the intermediate designs, electrical accumulators were introduced into the system to capture the excess energy that cannot be captured by the hydraulic system. The Sony LFP and the Toshiba SCiB were considered, and the Toshiba SCiB was chosen as a result of its superior charging power density. Upon further analysis, it was concluded that the batteries have a sufficient charging power density to capture the braking power. It was then suggested that the electric system can fully replace the hydraulic components, the ICE drive train, and the frictional braking system. With the convoluted hybrid system, which consists of many inefficient components, replaced by a simple electric-only drive train, the vehicle drive train efficiency can be increased. An electrical system was simulated, and the simulated models showed energy savings of around 25-30% with regenerative braking. The final drive system design consists of an electric/fuel cell vehicle with four in-hub motors.
References

Accumulators Catalog. (2005). In Eaton (Ed.), Vickers.

Achten, P.A.J. (2002). Dedicated design of the hydraulic transformer. Paper presented at the Proc. IFK.3, IFAS Aachen.

Achten, P.A.J. (2007). Changing the Paradigm. Paper presented at the Proc. of the Tenth Scandinavian Int. Conf. on Fluid Power, SICFP'07, Tampere, Finland.

Berdichevsky, G., Kelty, K., Straubel, J.B., & Toomre, E. (2006). The Tesla Roadster Battery System. In Tesla Motors (Ed.).

Complete Specifications. Civic Sedan. Retrieved December 5, 2009, from http://automobiles.honda.com/civic-sedan/specifications.aspx

Driver's Manual. (2009). Government of Georgia. Retrieved from http://www.dds.ga.gov/docs/forms/FullDriversManual.pdf

Electric Vehicles. (2009). Retrieved from http://ec.europa.eu/transport/urban/vehicles/road/electric_en.htm

Highway Statistics. (2007). Washington, D.C.: Federal Highway Administration. Retrieved from http://www.fhwa.dot.gov/policyinformation/statistics/2007/

Honda at the Geneva Motor Show. Retrieved December 5, 2009, from http://world.honda.com/news/2009/c090303Geneva-Motor-Show/

HyDrid. Retrieved December 5, 2009, from http://www.innas.com/HyDrid.html

Introducing the Chevrolet Volt. (2010). Retrieved December 5, 2009, from http://www.chevrolet.com/pages/open/default/future/volt.do

Larminie, J., & Lowry, J. (2003). Electric Vehicle Technology Explained. Chichester: John Wiley & Sons Ltd.

Performance Specifications. (2010). Tesla Roadster. Retrieved December 5, 2009, from http://www.teslamotors.com/performance/perf_specs.php

Sony Launches High-power, Long-life Lithium Ion Secondary Battery Using Olivine-type Lithium Iron Phosphate as the Cathode Material. (2009). News Releases. Retrieved December 5, 2009, from http://www.sony.net/SonyInfo/News/Press/200908/09-083E/index.html

Toshiba to Build New SCiB Battery Production Facility. (2008). News Releases. Retrieved December 5, 2009, from http://www.toshiba.co.jp/about/press/2008_12/pr2401.htm

Vael, G.E.M., Achten, P.A.J., & Fu, Z. (2000). The Innas Hydraulic Transformer, the Key to the Hydrostatic Common Pressure Rail. Paper presented at the International Off-Highway & Powerplant Congress & Exposition, Milwaukee, WI, USA.

Valøen, L.O., & Shoesmith, M.I. (2007). The effect of PHEV and HEV duty cycles on battery and battery pack performance. Paper presented at the Plug-in Hybrid Electric Vehicle 2007 Conference, Winnipeg, Manitoba. http://www.pluginhighway.ca/PHEV2007/proceedings/PluginHwy_PHEV2007_PaperReviewed_Valoen.pdf

Switchable Solvents: A Combination of
Reaction & Separations
GEORGINA W. SCHAEFER
School of Chemical and Biomolecular Engineering
Georgia Institute of Technology

A switchable solvent is a solvent capable of reversing its properties between a non-ionic liquid and an ionic liquid, which is highly polar and viscous. Switchable solvents have
applications for the Heck reaction, which is the chemical reaction of an unsaturated
halide with an alkene in the presence of a palladium catalyst to form a substituted alk-
ene. The objective of this research was to apply a switchable solvent system to the Heck
reaction in order to optimize reaction and separations by eliminating multiple reaction
steps. Switchable solvents reduce the need to add and remove multiple solvents because
they are capable of switching properties and dissolving both the inorganic and organic
components of the reaction. This reversal of chemical properties by a switchable solvent
provides for easier separation of the product, minimizes the cost by eliminating the need
for multiple solvents, and reduces the overall environmental impact of the industrial
process. Specifically, the cost is lowered by the ability of the catalyst and solvent to be
recycled from the system. In addition, the “switch” that initiates the formation of the
ionic liquid switchable solvent is carbon dioxide, which is cheap and nontoxic. In con-
clusion, we were able to use a switchable solvent system to obtain good product yields of
E-Stilbene, the desired product of the Heck reaction, and recycle the remaining catalyst
+ solvent which also produced good product yields at a lower economic and environ-
mental cost.

Advisor:
Charles A. Eckert
School of Chemical and Biomolecular Engineering
Georgia Institute of Technology
INTRODUCTION

A common problem for chemical synthesis is the reaction of an inorganic salt with an organic substrate, which is an important reaction in the production of many industrial chemicals and pharmaceutical products. Typically, a phase transfer catalyst (PTC), such as a quaternary ammonium salt, is used and must subsequently be separated from the product after the reaction has proceeded. However, the separation of a PTC from the product is very difficult. In fact, solvents such as dimethyl sulfoxide (DMSO) or ionic liquids (liquid salts at or near room temperature) that are capable of dissolving both the organic and inorganic components of the reaction still inhibit simple separation of the product from the catalyst (Heldebrant et al., 2005).

Now imagine a smart solvent that can reversibly change its properties on command through a built-in "switch". Our goal in designing such a solvent is to minimize the economic and environmental impact of such industrial processes while creating a solvent that remains highly polar. These solvents are able to dissolve both the organic and inorganic components of the reaction while highly polar, and then change properties for easier separation and effective product isolation after the reaction is complete.

Switchable solvent systems are capable of doing just that. These systems involve a non-ionic liquid, an alcohol and amine base, which can be converted to an ionic liquid upon exposure to a "switch". The switch chosen to induce this change in solvent properties is carbon dioxide. CO2 reacts with the alcohol-amine mixture to form an ammonium carbonate. Furthermore, it is cheap, readily available, benign, and easily removed by heating and purging with nitrogen or argon. Switchable solvent systems therefore should facilitate chemical syntheses involving reactions of inorganic salts and organic substrates by eliminating the need to add and remove different solvents after each synthetic step in order to achieve different solvent properties (Heldebrant et al., 2005; Phan et al., 2007).

Industrial chemical production usually requires multiple reaction and separation steps, each of which usually requires the addition and subsequent removal of a different solvent. For example, the synthesis of Vitamin B12 is achieved in 45 steps. The application of switchable solvent systems to industrial production processes of major chemicals and pharmaceuticals would significantly lower the associated pollution and cost of these processes by eliminating the need to add and remove multiple solvents for each reaction step (Heldebrant et al., 2005; Phan et al., 2007).

Project Description

Switchable solvents convert between a non-ionic liquid, which has varying polarity, and an ionic liquid, whose properties include higher polarity and higher viscosity. As discussed in previous research, the ideal properties of the solvent as a reaction medium include a usable liquid range, chemical stability, and the ability to dissolve both organic species and inorganic salts. In terms of the solvent's role in separations, the solvent should be decomposable at moderate conditions with a reasonable reaction rate, the decomposition products should have very high or very low vapor pressures, and recombination to form the solvent should be relatively easy. Our principal aims in designing a switchable solvent system to optimize reactions and separations were to eliminate multiple reaction steps, reverse solvent properties to facilitate better separations, and minimize the cost and environmental impact by optimizing catalyst and solvent recycle (Heldebrant et al., 2005; Xiao, Twamley, & Shreeve, 2004; Phan et al., 2007).

Ionic liquids have gained popularity in their technological applications as electrolytes in batteries, photoelectrochemical cells, and many other wet electrochemical
devices. They are particularly attractive solvents because of dramatic changes in properties such as polarity, which may be elicited through a "switch". On the other hand, changes in conditions such as temperature and pressure usually elicit only negligible to moderate changes in a conventional solvent's properties, making the use of multiple solvents for a single process necessary. In addition, ionic liquids have low vapor pressures, essentially eliminating the risk of inhalation. In particular, guanidinium-based ionic liquids have low melting points and good thermal stability, properties which make these high-nitrogen materials attractive alternatives for energetic materials (Xiao, Twamley, & Shreeve, 2004; Gao, Arritt, Twamley, & Shreeve, 2005).

Our research focused on the application of switchable solvents to the Heck reaction in order to optimize the reaction and separation. Specifically, we studied the reaction of bromobenzene and styrene in the presence of a palladium catalyst (PdCl2(TPP)2) and base to form E-stilbene, an important pharmaceutical intermediate in the production of many anti-inflammatories (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Xiao, Twamley, & Shreeve, 2004).

Figure 1. The Heck reaction of bromobenzene and styrene in the presence of palladium catalyst and ionic liquid.

Figure 2. Synthesis of the palladium catalyst used in the Heck reaction (Figure 1).

As illustrated in Figure 3, the non-ionic liquid can be "switched" to an ionic liquid by exposure to carbon dioxide and reversed back to a non-ionic liquid by exposure to argon or nitrogen. The reaction of bromobenzene and styrene in the presence of the palladium catalyst is run in the highly polar ionic liquid, which is able to dissolve both the organic and inorganic components of this system. The ionic liquid is a particularly effective medium for this reaction in that it is able to immobilize the palladium catalyst while preserving the overall product yield. In addition, ionic liquids are nonvolatile, nonflammable, and thermally stable, making them an attractive replacement for volatile, toxic organic solvents (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Xiao, Twamley, & Shreeve, 2004; Phan et al., 2007).

Figure 3. Ionic liquid formation; note the reversibility of this reaction under argon.

Figure 4. Switchable solvent system comprised of 1,8-diazabicyclo-[5.4.0]-undec-7-ene (DBU) and hexanol.
In a previous study, a guanidine acid-base ionic liquid was determined to be an effective medium for the palladium-catalyzed Heck reaction of bromobenzene and styrene. The guanidine acid-base ionic liquid was used as the solvent, ligand, and base in this reaction, with the guanidine acting as a strong organic base able to complex with the Pd(II) salt and as a replacement for a phosphine ligand. High activity, selectivity, and reusability were all observed under moderate conditions, and thus we expect our system to function best at similar conditions (Li, Lin, Xie, Zhang, & Xu, 2006; Gao, Arritt, Twamley, & Shreeve, 2005).

Project Description

Once the reaction is run in the ionic liquid, the homogeneous product-ionic liquid solution is extracted and separated into a two-phase product and catalyst/ionic liquid system. After the product is removed by extraction with heptane, the system is exposed to argon and the ionic liquid is reversibly switched back to a non-ionic liquid, where the system again separates into a two-phase system of salt precipitate byproduct/catalyst and non-ionic liquid solution. In this final stage the catalyst and solvent may be removed and recycled back into the process.

Figure 5. Process Diagram for the Heck reaction performed in a switchable solvent system.

Our studies of the palladium-catalyzed Heck reaction of bromobenzene and styrene were performed in a
10 ml window autoclave. First the catalyst solution was added and the solvent vacuumed out. Next the ionic liquid, bromobenzene, and styrene were added, and the system was pressurized. The autoclave was then left stirring and heating for three days until the reaction was completed. After three days, the autoclave was allowed to cool down and depressurize. Once the system was back to room temperature and atmospheric pressure, the homogeneous ionic liquid/product/catalyst solution was extracted from the autoclave with heptane under carbon dioxide in order to sustain the formation of the ionic liquid. The product-in-heptane phase was then removed and the remaining ionic liquid/catalyst phase was exposed to argon and heat. After exposure to argon and heating, the byproduct salt precipitated out of the catalyst/reversed non-ionic solvent solution. The catalyst and solution were then removed from the salt byproduct and recycled. Any remaining product left in the autoclave from the extraction with heptane was then extracted with dichloromethane (DCM) for later mass balance calculations.

Results and Conclusions

The Heck reaction was optimized by running at various temperature and pressure conditions. In order to assess the success of each system, the conversion percentages were compared.

Based on Figure 6, the optimal conditions for product formation and catalyst + solvent recovery were a temperature of 115˚C and a pressure of 30 bar. The reactions run at these conditions show very acceptable and repeatable results, such as an 83% and an 84% yield, the highest observed percent yields. However, the high variability of the product yield results at these conditions can be explained by the poor extraction methods for this system. Extracting with large amounts of heptane (greater than 50 mL) led to higher product yields (greater than 60%) whereas extracting with less heptane (10-50 mL) led to lower product yields at these conditions due to product loss in the system. Therefore, there was a tradeoff between the amount of heptane used in the extraction to recover the product, which adds to the overall cost of the process, and the amount of product we were able to recover from the system.

Figure 6. Palladium (PdCl2(TPP)2) catalyzed Heck reaction of bromobenzene and styrene in a switchable ionic liquid system. The catalyst and ionic liquid solution used in the reaction run at T=115˚C and P=30 bar, which had a 55% yield, was recycled from the reaction run at T=115˚C and P=30 bar, which had an 83% yield, demonstrating that the catalyst in the ionic liquid remained active in the reaction and that the recycle was successful.
overall cost of the process, and the amount of product method, different solvents could also be tried in this
we were able to recover from the system. extraction. In addition, it was often difficult to extract
all of the products and reactants from the autoclave due
The catalyst in solvent was also extracted from the reac-
to the viscosity of the ionic liquid. A better extraction
tion which had an 83% product yield and recycled for a
solvent would facilitate this step and minimize product,
second reaction performed at the same conditions. Ob-
catalyst, and solvent loss. Another difficulty we encoun-
served yield of 55% from the recycle reaction demon-
tered with our system was evaporating the solvent out
strates that the recycled catalyst activity was preserved
of the catalyst solution before adding the other reac-
and that the solvent was able to “switch” back to an
tants. Changing the solvent from chloroform to toluene
ionic liquid a second time to run the reaction success-
was attempted, but toluene was even more difficult to
fully. In general, the reactions which experienced lower
evaporate from the system. The solution we found was
percent yields did so as a result of too harsh reaction
to dissolve the catalyst directly into 1,8-diazabicyclo-
conditions, such as too high temperatures and/or pres-
[5.4.0]-undec-7-ene (DBU) and then, after the cata-
sures. We predict that the palladium catalyst loses ac-
lyst and DBU solution was completely homogeneous,
tivity and perhaps decomposes at higher temperatures
add hexanol and expose to carbon dioxide in order to
and pressures which accounts for the lower yields at
convert it to an ionic liquid and then added it to the
harsher conditions. In the reactions with higher percent
system directly. NMR samples of the catalyst in DBU
yields the ionic liquid and catalyst extracted from the
and hexanol were taken to test that the activity of the
system were usually light yellow in color, whereas the
catalyst was preserved in this solution by confirming
reactions with lower percent yields were brown or black
the catalyst’s molecular structure remained unchanged.
indicating catalyst decomposition. In particular, two
In addition, the percent yields of the reactions run with
of the reactions run at T=140˚C and P=50 bar which
this method confirm that the palladium catalyst retains
had product yields of 6% and 9% respectively contained
its activity in the DBU-hexanol solution. This method
black particles within the catalyst and solvent mixture.
also eliminates the addition of an extra solvent and its
In addition to the extremely low percent yield, we be-
difficult removal from the system.
lieve the palladium catalyst complex decomposed, and
the black particles were possibly palladium nanopar- In conclusion, the application of switchable solvents to
ticles (palladium black). the Heck reaction of bromobenzene and styrene was
successful in optimizing product (E-Stilbene) yield and
In order to improve this system, further investigations
catalyst + solvent recovery for a good recyclable system.
into the extraction methods should be performed in
E-Stilbene is an important intermediate in the synthesis
order to find a work-up method which minimizes prod-
of some pharmaceuticals such as anti-inflammatories
uct loss by maximizing the amount of product that can
and is used in the manufacture of dyes and optical
be washed out of the product phase. The variability of
brighteners. A switchable solvent system could there-
the product yield results are a reflection of this difficult
fore be implemented for chemical synthesis of Heck
extraction procedure. In our work, we extracted with
reaction products in order to reduce the economic and
heptane (between 10 and 50mL) because of its low
environmental impact of this industrial process. (Hel-
boiling point, which made it easy to remove the left-
debrant, Jessop, Thomas, Eckert, Liotta, 2005; Heldeb-
over heptane in the system after the product had been
rant et al., 2005).
extracted by heating. In order to improve the extraction

Article: Schaefer
77
References
Heldebrant, D.J.; Jessop, Philip G; Thomas, Colin A.;
Eckert, Charles A.; Liotta, Charles L. J. The Reaction
of 1,8-Diazobicyclo[5.4.0]undec-7-ene (DBU) with
Carbon Dioxide. Organic Chemistry 2005, 70, 5335-
5338.

Heldebrant, D.J.; Jessop, Philip G; Li, Xiaowang; Eck-


ert, Charles A.; Liotta, Charles L. Green Chemistry-
Reversible nonpolar-to-polar solvents. Nature 2005,
Vol. 436, 1102.

Xiao, Ji-Chang; Twamley, Brendan; Shreeve, Jean’ne


M. An Ionic Liquid-Coordinated Palladium Complex:
A Highly Efficient and Recyclable Catalyst for the
Heck Reaction. Organic Letters 2004, Vol. 6, No. 21,
3845-3847.

Li, Shenghai; Lin, Yingjie; Xie, Haibo; Zhang, Suobo;


Xu, Jianing. Brønsted Guanidine Acid-Base Ionic
Liquids: Novel Reaction Media for the Palladium-
Catalyzed Heck Reaction. Organic Letters 2006, Vol.
8, No. 3, 391-394.

Phan, Lam; Chiu, Daniel; Heldebrant, David J.; Hut-


tenhower, Hillary; John, Ejae; Li, Xiaowang; Pollet,
Pamela; Wang, Ruiyao; Eckert, Charles A.; Liotta,
Charles L.; Jessop, Phillip G. Switchable Solvents Con-
sisting of Amidine/Alcohol or Guanidine/Alcohol
Mixtures. American Chemical Society 2007.

Gao, Ye; Arrit, Sean W.; Twamley, Brendan; Shreeve,


Jean’ne M. Guanidinium-Based Ionic Liquids. Inor-
ganic Chemistry 2005, Vol. 44, No. 6, 1704-1712.

Spring 2010: The Tower


78
Submission guidelines
The Tower accepts papers from all disciplines offered at narrow in scope, but critical toward his or her overall
Georgia Tech. Research may discuss empirical or theo- research aim.
retical work, including, but not limited to, experimental,
historical, ethnographic, literary, and cultural inquiry. A dispatch should: not be more than 1500 words (not
The journal strives to appeal to readers in academia and including title page and references); and have at least
industry. Articles should be easily understood by bach- the following sections:
elors-educated individuals of any discipline. Although
The Tower will review submissions of highly technical • Introduction/Background Information
research for potential inclusion, submissions must be • Methods & Materials/Procedures
written to educate the audience, rather than simply • Results
report results to experts in a particular field. Original • Discussion/Analysis
research must be well supported by evidence, arguments • Future Work
must be clear, and conclusions must be logical. More • Acknowledgements, as a separate page
specifically, The Tower welcomes submissions under the
following three categories: articles, dispatches, and per- Perspectives
spectives. A perspective reflects active scholarly thinking in
which the author provides personal viewpoints and in-
Formatting vites further discussions on a topic of interest through
Articles literature synthesis and/or logical analysis
An article represents the culmination point of an
undergraduate research project, where the author ad- A perspective should: not be more than 1500 words
dresses a clearly defined research problem from one, or (not including title page and references); address some
sometimes multiple approaches. of the following questions: Why is the topic impor-
tant? What are the implications (scientific, economic,
A properly formatted article must: be between 1500 cultural, etc.) of the topic or problem? What is known
and 3000 words (not including title page, abstract, and about this topic? What is not known about this issue?
references); include an abstract of 250 words or less; Waht are possible methods to address this issue?
and have at least the following sections:
General
• Introduction/Background Information The following formatting requirements apply to all
• Methods & Materials/Procedures types of submissions and must all be satisfied before a
• Results submission will be reviewed. All papers must: adhere to
• Discussion/Analysis APA formatting guidelines as specified in the Publica-
• Conclusion tion Manual of the American Psychological Associa-
• Acknowledgements tion, 5th ed. (Washington, DC: American Psycho-
logical Association, 2001); be submitted in Microsoft
Dispatches Word format; be set in 12-point Times New Roman
A dispatch is a manuscript in which the author reports font, double-spaces; not include identifying informa-
recent progress on a research challenge that is relatively tion (name, professor, department) in the text, refer-


79
Submission guidelines
ence section, or on the title page. Papers will be tracked submitting
by special software that will keep author information To submit a paper, authors must register on our Online
separate from the paper itself; be written in standard Journal System (OJS) at http://ejournals.library.gat-
U.S. English; utilize standard scientific nomenclature; ech.edu/tower. Once the author fills out the required
define new terms, abbreviations, acronyms, and sym- information and registers as an author, he or she will
bols at their first occurance; acknowledge any funding, have access tot he submission page to begin the multi-
collaborators, and mentors; not use footnotes — if step submission process.
footnotes are absolutely necessary to the integrity of
the paper, please contact the AESR at review@gttower. For more detailed submission guidelines, as well as cur-
org; reference all tables, figures, and references within rent deadlines and news, please visit gttower.org.
the text of the document; adhere to the Georgia Insti-
tute of Technology Honor Code regarding plagiarism
and proper referencing of sources; and keep direct quo-
tations to an absolute minimum — paraphrase unless a
direct quote is absolutely necessary.

Deadlines
Submissions are accepted on a rolling basis throughout
the year. The Tower publishes an issue per semester.
Due to the review and production process, for submis-
sions to be considered for each issue they must be
submitted before the publicized deadline, which can
be found at gttower.org. Submissions received after
this deadline will be considered for the following issue.
If the submission quality will be compromised in the
attempt to meet the deadline, authors are encouraged
to further develop their work and only submit it once
it is fully realized.

Eligibility
Submitters must be enrolled as undergraduate students
at the Georgia Institute of Technology to be eligible
for publication. Authors have up to three months after
graduation to submit papers regarding research com-
pleted as an undergraduate. The priciple investigator
may not be included among the co-authors.

Spring 2010: The Tower




IBEW Local 613 Electricians


Do Everything from Wiring Electrical Outlets
to Programmable Logic Controls
Yourelectricalsystemisthelifebloodofyourbusiness.Ifitfails,yourphones
don’tring,computerswon’twork.YourBUSINESSSTOPS.
Don’t take chances with your electrical system.  Demand the best trained
Electrical Professionals available from an IBEW Local Union 613 Electrical
Contractor.IBEWLocal613memberscompletearigorous5Ͳyearclassroom
and onͲtheͲjob training program.  They know how to get your electrical
systembackupandrunningasquicklyaspossible.
PowerUp.CallIBEWLocal613forthenameofQualifiedUnionElectrical
Contractorsnearyou.

InternationalBrotherhoodofElectricalWorkersLocal613
501PulliamStreet,SWSuite250Atlanta,Georgia30312
(404)523Ͳ8107
www.ibew613.org
GeneR.O'Kelley MaxMountJr
BusinessManager President

Advertisements
Spring 2010 : The Tower
We know your eyes are on the future.
So, look at Yokogawa.
We are looking for chemical, electrical and mechanical engineers.
For more information on employment opportunities,
log on to www.yokogawa.com/us

Yokogawa Corporation of America


800-447-9656 www.yokogawa.com/us
800 524 SERV
7 3 7 8

Advertisements
Compliments of

Hoover Foods Inc.

www.hooverfoods.com

Hoover Foods is a franchisee of Wendys International, Inc.

Merial,
a world leading
animal health company,
is a proud
contributor to
Georgia’s growth
in the
biotech sector.

We are dedicated to enhancing the health and well-being of animals.

Spring 2010 : The Tower


Compliments of

A Family Business
for 110 Years
21 East Broad Street
Savannah, GA 31401
912.236.1865
w w w. b a r n h a r d t . n e t
Fax: 912.238.5524

High-Efficiency Air Handlers


Sales, Marketing & Customer Service

2175 West Park Place Blvd.


Stone Mountain, GA 30087

Manufacturing & Engineering Facility

1995 Air Industrial Park Road


Grenada, MS 38901

1-800-848-2270

Advertisements
The World’s Leading Conveyor Belt Company

For internship and career opportunities please


E-Mail renee.speers@fennerdunlop.com

325 Gateway Drive


Lavonia, GA 30535
1-706-356-7607
Fax: 1-706-356-7657
www.fennerdunlop.com

Transportation • Environmental • Planning


Civil • Construction • Program Management
ON SITE ELECTRICAL SERVICE
AND CONSTRUCTION

Parsons Brinckerhoff
3340 Peachtree Rd. NE METROPOWER, INC.
Suite 2400, Tower Place100 1703 Webb Drive
Atlanta, GA 30326
(404) 237-2115
Norcross, GA 30093
www.pbworld.com Phone: 770-448-1076
Fax: 770-242-5800

Spring 2010 : The Tower


Aderans Research is dedicated to developing state-of-the-art
cell engineering solutions for hair loss, a pervasive condition
3970 Johns Creek Ct with extremely negative effects on the lives of millions of people.
Located in Atlanta GA and Philadelphia PA, Aderans Research
Ste 500 is the pioneering arm of two great companies in the hair
Suwanee, GA 30024 restoration world: Aderans Company, Ltd, the world’s largest
770-871-4500 manufacturer of wigs; and Bosley, the largest hair transplant
company in the world.
Aderans Research
www.fishersci.com 2211 Newmarket Parkway, Suite 142
Marietta, GA 30067
1-678-213-1919
www.aderansresearch.com

“Stasco people make the difference!”

Plumbing and piping systems


construction, engineering and design.

Compliments of
Piedmont Center
1391 Cobb Parkway N.
Marietta, GA 30062

770-422-7118
www. stasco-mech.com

Advertisements
Albany~Augusta~Atlanta
Environmental Engineering
Water Resources Engineering
Sewer and Stormwater
Civil Site
Transportation Design
Operations and Permitting
Surveying / GIS / Mapping
Funding and Planning Assistance
Operations and Permitting www.speng.com

5036 B.U. Bowman Drive


Burord, GA 30518
Phone: 770-904-4444
Fax: 770-904-0888

Cunningham Forehand
Matthews & Moore
Architects, Inc.
2011 MANCHESTER STREET, N.E.
ATLANTA, GEORGIA 30324
404.873.2152

Spring 2010 : The Tower


Advertisements
TT
U


R

J

 V
I
I
:
he
ffof
yt
a
db
0st
duce
9-201
o
Pr
200

You might also like