Andreas Krause
Novartis Pharma AG
Basel 4002
Switzerland

Melvin Olson
Novartis Pharma AG
Basel 4002
Switzerland

Series Editors:

J. Chambers
Bell Labs, Lucent Technologies
600 Mountain Ave.
Murray Hill, NJ 07974
USA

D. Hand
Department of Mathematics
South Kensington Campus
Imperial College, London
London SW7 2AZ
United Kingdom

W. Härdle
Institut für Statistik und Ökonometrie
Humboldt-Universität zu Berlin
Spandauer Str. 1
D-10178 Berlin, Germany
e-ISBN: 0-387-22708-3
Preface
This is now the fourth edition of The Basics of S-Plus since 1997. S-Plus has seen steady growth in popularity and has established itself in many educational and business settings as a major data analysis tool. S-Plus is valued for its modern, interactive data analysis environment, whether as the primary system or as a complement to other standards such as SAS (the latter is particularly true for the industry we work in, pharmaceuticals).
We have followed the various releases with new editions of our book, introducing over time major changes such as the incorporation of S Version 4 (the underlying language), Trellis graphs, a graphical user interface (in particular for the Windows operating system), and a chapter on R and its differences from S-Plus (which are minor for the material covered in this book). This edition updates edition 3 to cover new functions and features of S-Plus Version 7.0 (working from the beta release for MS Windows and Linux), adds more practical tips and examples, and corrects a few mistakes.
We are very grateful to all our readers, in particular those who sent us suggestions, comments, and other feedback. You will see some of it reflected in this book.
The number of new books appearing on S-Plus and its counterpart, R, continues to grow substantially each year. If you want to get an impression of the books available, Insightful maintains a list of S-Plus references on its Web site, http://www.insightful.com/. The list currently comprises about 50 books and manuscripts. You can also search for the string "Splus" or "S-Plus" (without quotes) in an online bookstore such as http://www.amazon.com/.
Some of the books on S-Plus are Analyzing Medical Data Using S and S-Plus, by Everitt and Rabe-Hesketh; Regression Modeling Strategies, by Harrell; Bootstrap Methods and Permutation Tests, by Hesterberg, Monaghan, et al.; Applied Statistics in the Pharmaceutical Industry, edited by Millard and Krause; Mixed Effects Models in S and S-Plus, by Pinheiro and Bates; Modeling Survival Data: Extending the Cox Model, by Therneau and Grambsch; and the fourth edition of Modern Applied Statistics with S-Plus as well as the second edition of S Programming, both by Venables and Ripley.
This book is intended to provide a head start into the world of S-Plus (and it serves equally well as an introduction to R). We introduce you to S-Plus by taking you through interactive sessions, entering commands and later analyzing data, pointing out tips and pitfalls, and complementing each session with exercises and detailed solutions.

If you have never worked with S-Plus before and want to get a first impression, take a look at the chapters "A First Session," "A Second Session," and "Exploring Data" to get an idea of the philosophy of the programming language. If you intend to work with point-and-click graphical user interfaces, take a look at "Graphical User Interfaces." We are convinced, though, that if you want to work with S-Plus permanently, you will need the command line, and you will soon come to appreciate it.
The book originates from lectures, and each chapter can serve as the basis for a 90-minute lecture. The exercises reinforce the material covered and sometimes point out a few more details. It is instructive to work out a solution to an exercise before comparing it to the solution given here. There is never just a single solution.
Finally, we would like to thank all those who have directly or indirectly contributed to this edition and previous ones. For this edition, Chyi-Hung Hsu, Michael Merz, and José Pinheiro gave very helpful comments on the manuscript. John Kimmel at Springer-Verlag is simply a great person to work with, dedicated and professional. David Smith, Michael O'Connell, and others at Insightful, Inc. (Seattle) continue to be supportive partners by providing us with alpha and beta releases of upcoming S-Plus versions. Following the discussion forum s-news provides insight into frequently asked questions and problems that S-Plus users face. And last but not least, various people from all over the world supplied comments on earlier editions of the book, pointed out possible improvements and errors, and provided good motivation to revise and update the book into the current edition 4, which you happen to be reading right now.
We are always interested in any comments that you might have about the book. You are welcome to contact us by sending an e-mail to Andreas.Krause@Novartis.com or Melvin.Olson@Novartis.com. Note, though, that we will not be able to provide general support, such as answering programming questions. For these purposes, please refer to the sections where we describe how to get help and support (pp. 413 and 422). You
might also want to check if your institution has a contract with Insightful
that covers support.
Finally, a Web page has been set up to accompany this book:

http://www.elmo.ch/doc/splus-book/

It provides information about the book, links to relevant Internet pages, and a list of errata (which we are confident will be fairly short).
Basel, Switzerland
April 2005
Contents

Preface  v
Figures  xvii
Tables  xxi

1 Introduction  1
1.1 The History of S and S-Plus  2
1.2 S-Plus on Different Operating Systems  4
1.3 Notational Conventions  6
2 Graphical User Interface  9
2.5 Object Explorer  18
2.6 Help  19
2.7 Data Export  21
2.8 Working Directory  23
2.9 Data Import  24
2.10 Data Summaries  27
2.11 Graphs  29
2.12 Trellis Graphs  36
2.13 Linear Regression  38
2.14 PowerPoint (Windows Only)  42
2.15 Excel (Windows Only)  44
2.16 Script Window  45
2.17 UNIX/Linux GUI  47
2.18 Summary  56
2.19 Exercises  57
2.20 Solutions  58
3 A First Session  73
3.1 General Information  73
3.1.1 Starting and Quitting  74
3.1.2 The Help System  75
3.1.3 Before Beginning  75
3.2 Simple Structures  76
3.2.1 Arithmetic Operators  76
3.2.2 Assignments  77
3.2.3 The Concatenate Command: c  79
3.2.4 The Sequence Command: seq  80
3.2.5 The Replicate Command: rep  81
3.3 Mathematical Operations  82
3.4 Use of Brackets  84
3.5 Logical Values  85
3.6 Review  88
3.7 Exercises  91
3.8 Solutions  92
4 A Second Session  95
4.1 Constructing and Manipulating Data  95
4.1.1 Matrices  96
4.1.2 Arrays  101
4.1.3 Data Frames  104
4.1.4 Lists  107
4.2 Introduction to Functions  108
4.3 Introduction to Missing Values  109
4.4 Merging Data  110
4.5 Putting It All Together  111
4.6 Exercises  114
4.7 Solutions  116

5 Graphics  125
5.1 Basic Graphics Commands  125
5.2 Graphics Devices  126
5.2.1 Working with Multiple Graphics Devices  128
5.3 Plotting Data  128
5.3.1 The plot Command  129
5.3.2 Modifying the Data Display  130
5.3.3 Modifying Figure Elements  131
5.4 Adding Elements to Existing Plots  133
5.4.1 Functions to Add Elements to Graphs  133
5.4.2 More About abline  135
5.4.3 More on Adding Axes  135
5.4.4 Adding Text to Graphs  137
5.5 Setting Options  138
5.6 Figure Layouts  140
5.6.1 Layouts Using Trellis Graphs  140
5.6.2 Matrices of Graphs  140
5.6.3 Multiple-Screen Graphs  141
5.6.4 Figures of Specified Size  142
5.7 Exercises  145
5.8 Solutions  146
6 Trellis Graphics  153
6.1 An Example  154
6.2 Trellis Basics  156
6.2.1 Trellis Syntax  156
6.2.2 Trellis Functions  157
6.2.3 Displaying and Storing Graphs  157
6.3 Output Devices  158
6.4 Customizing Trellis Graphs  160
6.4.1 Setting Options  160
6.4.2 Arranging the Layout of a Trellis Graph  161
6.4.3 Ordering of Graphs  163
6.4.4 Axis Customization  164
6.4.5 Modifying Panel Strips  165
6.4.6 Arranging Several Graphs on a Single Page  165
6.4.7 Updating Existing Trellis Graphs  167
6.4.8 Writing Panel Functions  168
6.5 Further Trellis Hints  171
6.5.1 Useful General Trellis Settings  172
6.5.2 Graphing Individual Profiles  173
6.5.3 Preparing Data to Use for Trellis  174
6.6 Exercises
6.7 Solutions
7 Exploring Data  193
7.1 Descriptive Data Exploration  193
7.2 Graphical Exploration  204
7.2.1 Interactive Dynamic Graphics  219
7.2.2 Old-Style Graphics  219
7.3 Distributions and Related Functions  220
7.4 Confirmatory Statistics and Hypothesis Testing  225
7.5 Missing and Infinite Values  231
7.5.1 Testing for Missing Values  232
7.5.2 Supplying Data with Missing Values to Functions  232
7.5.3 Missing Values in Graphs  233
7.5.4 Infinite Values  233
7.6 Exercises  235
7.7 Solutions  238
8 Statistical Modeling  251
8.1 Introductory Examples  251
8.1.1 Regression  251
8.1.2 Regression Diagnostics  253
8.2 Statistical Models  255
8.3 Model Syntax  256
8.4 Regression  257
8.4.1 Linear Regression and Modeling  258
8.4.2 ANOVA  261
8.4.3 Logistic Regression  263
8.4.4 Survival Data Analysis  265
8.4.5 Endnote  267
8.5 Exercises  268
8.6 Solutions  271
9 Programming  285
9.1 Lists  285
9.1.1 Adding and Deleting List Elements  287
9.1.2 Naming List Elements  288
9.1.3 Applying the Same Function to List Elements  290
9.1.4 Unlisting a List  294
9.1.5 Generating a List by Using split  294
9.2 Writing Functions  294
9.2.1 Documenting Functions  297
9.2.2 Scope of Variables  297
9.2.3 Parameters and Defaults  298
9.2.4 Passing an Unspecified Number of Parameters to a Function  300
9.2.5 Testing for Existence of an Argument  301
9.2.6 Returning Warnings and Errors  301
9.2.7 Using Function Arguments in Graphics Labels  302
9.3 Iteration  303
9.3.1 The for Loop  303
9.3.2 The while Loop  304
9.3.3 The repeat Loop  305
9.3.4 Vectorizing a Loop  305
9.3.5 Large Loops  307
9.4 Debugging: Searching for Errors  308
9.4.1 Syntax Errors  309
9.4.2 Invalid Arguments  310
9.4.3 Execution or Run-Time Errors  310
9.4.4 Logical Errors  311
9.5 Output Using the cat Function  314
9.6 The paste Function  316
9.7 Exercises  318
9.8 Solutions  319

10 Object-Oriented Programming  323
10.1 Creating Classes and Objects  325
10.2 Creating Methods  328
10.3 Debugging  333
10.4 Help  334
10.5 Summary and Overview  334
10.6 Exercises  335
10.7 Solutions  336
15.1 Development
15.2 Some Similarities Between R and S
15.3 Some Differences Between R and S
15.3.1 Language
15.3.2 Libraries
15.3.3 Trellis-Type Graphs
15.3.4 Colors and Lines
15.3.5 Data Import and Export Formats
15.3.6 Memory Handling
15.3.7 Mathematical Formulae in Graphs
15.3.8 Graphical User Interfaces
15.3.9 Start-Up Mechanism
15.3.10 Windows Integration
15.3.11 Support
15.4 Summary

16 Bibliography  425
16.1 Print Bibliography  425
16.2 On-Line Bibliography  427
16.2.1 S-Plus Related Sources  427
16.2.2 TEX-Related Sources  429
16.2.3 Other Sources  429

Index  431
Figures
Tables

6.1 Trellis functions  157
1 Introduction
Over the years, the S language and S-Plus have undergone many changes. Since its development began in the mid-1970s, the three main authors of S (Rick Becker, John Chambers, and Allan Wilks) have enhanced the entire language considerably. All their work was done at Bell Labs, with the original goal of defining a language that makes it easier to do repetitive tasks in data analysis, such as calculating a linear model.
In the following years, many people contributed to the S project in one form or another. People outside Bell Labs also became aware of the interesting development and took part in it, and this is to a great extent the way S and S-Plus are still developed today. A very lively user community works with and on S/S-Plus, and its members especially appreciate the S style of working in an integrated environment. Special strengths are the modern and flexible language and high-quality graphics generation.
It is noteworthy that the authors do not consider S a primarily statistical system. The system was developed to be flexible and interactive, designed especially for data analysis of an exploratory nature, which began to boom in the late 1970s after the release of Tukey's book (1977) on the subject. Most of the statistical functionality was added later, and many statistics routines, such as estimation, regression, and testing, were incorporated by the S-Plus team.
This chapter describes the development of S and S-Plus over the years, clarifies the differences between the two, and points to some further references.
models such as linear regression, generalized linear models, principal components, and clustering on big data sets. The GUI has been revised and is now based on a new toolkit, Eclipse. It comes with a workbench to develop S-Plus programs, track changes, and use source control facilities. These new functionalities are only available in the newly introduced Enterprise Developer edition, though, and are not part of the Professional Developer edition.

S is still the heart of the system, and the core S team continues to work on the S system. The whole S functionality is incorporated in S-Plus and enhanced, and today the S system is no longer publicly available. In the remainder of the book, we will use S-Plus as the standard reference.
In addition to S-Plus, the R project started in 1997. R is software that is not unlike S and is freely available. It has become increasingly popular in recent years, since a core developer team has continuously improved and further developed the system. A fairly large group of supporters around the world contributes libraries, in very much the way S started out a long time before. This group of people includes many of the contributors of S-Plus libraries. We will point out some of the similarities and differences between the two systems in a separate chapter.
System requirements (recommended minimum)
to the plus sign (+), and you can enter the remainder of the command. A preview of this is shown below.

> 3+(4*2
+ )

We mostly omit the continuation prompt, both because the code is easier to read and so that the + sign is not accidentally typed in as a plus operator. For longer program code, we omit the prompt, too, as we do not assume interactive input but the use of an editor instead. S-Plus objects and expressions are written in a special font, as in the example print.

There are a few examples of commands to either the UNIX or DOS shell. For these examples, no prompt is used.

When presenting commands, we sometimes include descriptive text. The descriptive text is written in S-Plus syntax for comments: anything after the number sign # until the end of the line is treated as a comment and is not interpreted as a command.
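To illustrate the prompt, continuation, and comment conventions together, here is a short hypothetical session transcript (the bracketed lines are the output S-Plus prints):

```
> 1 + 2          # everything after the number sign is a comment
[1] 3
> 3 + (4 *       # the command is incomplete, so S-Plus shows the
+ 2)             # continuation prompt until the expression is closed
[1] 11
```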
A summary of these conventions is found in Table 1.2.

Table 1.2. Notational conventions.

Convention    Explanation
>             S-Plus prompt
+             Command has continued onto the next line (displayed by
              S-Plus but omitted in this monograph)
Commands      S-Plus commands are shown in typewriter font
No prompt     For calls to the UNIX or DOS shell
#             Comment symbol indicating the start of a comment
placeholders  Need to be replaced by an appropriate expression, such as
              filename, which needs to be replaced by a valid file name
Ctrl-C        Press and hold down the Ctrl key and press C
Menu          Menu entries and buttons are referred to in this font
Note          Notes point out something important, such as an example,
              an application, or an exception. The end of a note is
              marked by a symbol
2 Graphical User Interface
2.1 Introduction
S-Plus has an excellent graphical user interface (GUI) that allows the user to access its functionality by pointing and clicking with the mouse. This convenient way of working with S-Plus was previously only available for the Windows platform but has now been added for the UNIX/Linux platform as well.

The advantage of the GUI for the S-Plus novice is that you don't have to know the syntax of S-Plus to get started. All you need is a little familiarity with typical Windows software and a data set in some sort of standard format.
The descriptions in this chapter are, for the most part, based on S-Plus for Windows. Users of UNIX/Linux will benefit from this chapter, despite it being aimed at the Windows user of S-Plus, as we will show typical applications and examples and point out major differences between the two operating systems. The chapter ends with a section dedicated solely to the GUI available with UNIX/Linux.
The approach we take in this chapter is to quickly introduce the S-Plus system design under Windows, show the briefest of explanations of how it functions, and finish by describing in detail the various tasks needed to complete a data analysis, from data input to printing and saving the results. It is by no means intended to be an extensive or exhaustive exploration of the GUI but merely a way of familiarizing you with its structure, where to find things, and, most importantly, where and what to try for more detailed options.
Figure 2.1. The S-Plus screen and its components: the Object Explorer, the
Commands window, and the toolbars.
For those who are not that comfortable with window- and icon-based software, the following subsections provide a crash introduction to the essentials. The pull-down menus across the top are used to group categories of commands or options to be set. Under the File menu, we find actions relating to files (Open, Close, Import Data, Save) as well as to exiting the system. The toolbars below the menus contain buttons that are convenient shortcuts to commands found through the layers in the menus. Some of the toolbar buttons (e.g., 2D Plots) open a palette containing a myriad of options to complete your task.

An online help facility is available through the main menu. The typical approach to handling the help system (i.e., search by index or topic) has been used. The big advantage of the help system in S-Plus is that it includes the online manuals. We encourage liberal use of the help system.
2.2.1 Using a Mouse

Using a mouse efficiently is important to get the most out of the system. Clicking once on the left mouse button is usually used to highlight an item in a list (e.g., a file out of a list of files) or to select a menu heading or a button from a toolbar. You will not always be able to guess the function of a toolbar button merely by looking at the icon, but if you are at a loss, simply position the mouse over the button in question and a short text description will appear below it. Double-clicking the left mouse button is used to select and execute. Examples include double-clicking on a file name to select it and start the function, or on a part of a graph to select and edit it. Clicking once with the right mouse button opens a context menu that changes depending on the item selected.
2.2.2 Object Explorer
2.2.3 Commands Window

The Commands window is actually the heart of the S-Plus system. Every command that is performed via menus and buttons can also be issued as a typed command in the Commands window.
2.2.4 Toolbars

An additional feature of the toolbars in S-Plus is that they are context-sensitive ("smart"). Open a graph sheet, for example, and extra buttons specific to graph sheets will appear in the toolbar area.
2.2.5 Graph Sheets

2.2.6 Script Window

The Script window also consists of two panes, one on top and one on the bottom. The top pane can be used as a development space in which to create or fine-tune a section of code. When the commands in the top pane are run, their output appears in the bottom pane. In function, the Script window is similar to the Commands window, the difference being that the former runs commands in segments, whereas the latter is completely interactive and executes commands one at a time as they are input. More details of the layout of the Script window and how it operates are provided in Section 2.16.
2.3.1 Importing Data

You probably have your own data that you want to analyze, so the first thing you have to know is how to import it into S-Plus. There is an Import Data facility located in the File menu. In the From File dialog, you will be asked for the name you want for your data (which may be more than eight characters) and have the usual Windows boxes for specifying the name and location of your data file using the Browse button. Pay careful attention to the type of data (File Format pull-down menu) that you have and define it properly in the corresponding box. The data file types available are shown in Table 2.1.
Table 2.1. File types available with the Import From File facility.

ASCII, ASCII formatted, dBase, Epi Info, Excel, FoxPro, Gauss, Gauss UNIX, Lotus 1-2-3, Matlab, Minitab, MS Access, MS SQL Server, Paradox, QuattroPro, SAS, SAS Transport, SigmaPlot, SPSS, SPSS Portable, S-Plus, Stata, Systat
When the spreadsheet containing newly imported data is closed, the new
data automatically appear as a new entry in the Object Explorer.
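The Import Data dialog also has a command-line counterpart in recent S-Plus versions, the importData function. As a minimal sketch (the file name here is a hypothetical placeholder; the file format is normally inferred from the extension):

```
# import a data file into an S-Plus data frame;
# "mydata.txt" is a made-up file name for illustration
mydata <- importData("mydata.txt")
```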
2.3.2 Graphs

Graphs are created by connecting the data shown in the Object Explorer to a graphical function represented by a button in a palette. In the Object Explorer, the data are displayed with their full name, and after a click on the data object's name, the elements of the selected data set are displayed in the right-hand pane of the window. By clicking on these elements, the set of variables to be plotted can be selected. To select a second and a third variable, hold down the Ctrl key and click on the element.
Before actually displaying the data graphically, the graph palettes need to be opened if they are not open yet. Open the 2D or 3D graph palette by using the appropriate icon in the toolbar. The 2D graph palette appears in Figure 2.3 and shows all the types of 2D graphs available.
Once the palette is open, you might want to move the mouse over the
different icons slowly. If the mouse stops for a second, the name of the
method represented by the button is displayed. Select the plot type by
clicking on it. S-Plus executes the graphical method right away by using
the selected data.
Note For creating a graph, the order of selection of variables is important.
The variable selected rst becomes the x-variable, the second variable selected becomes the y-variable, and if a third variable is selected, it becomes
the z-variable.
Once data have been selected, another graphical method can be applied
simply by clicking on a different graph button in the palette.
Note If you click on a graphics button and a graph is set up but not
drawn, this probably means that the data selected and the graph type
chosen are not compatible. Selecting two variables and clicking on a 3D
graphics method is an example of such a situation.
Once a basic graph has been created, it can be edited. Double-click on
the element to modify, for example, the axis. A window opens and offers
the possibility of modifying all the components such as range, tickmarks,
color, width, label, and much more.
Using the Annotation palette that appears once a graph sheet is open,
further elements, such as text, or graphics symbols, such as rectangles and
arrows, can be inserted into the graph.
A graph sheet may be printed using File - Print Graph Sheet... or
saved using File - Save. All that is left is to specify the destination file
name and directory.
2.3.3
2.3.4
2.3.5 Chapters
S-Plus gives you the ability to save files from different projects into different areas called chapters. Chapters provide an efficient mechanism for
project management. All elements open during a session, such as the Object
Explorer and graph sheets, can be saved to a chapter such that on reopening
the chapter, everything is as you left it. Since S-Plus keeps all data items
ever created, we highly recommend the use of chapters as an easy way of
organizing your work and switching between project files.
The actual operation of chapters is quite straightforward. Under File-Chapters you have the options of Attach/Create, Detach, and New
Working.
Create a new chapter by choosing the Attach/Create option and specifying the directory where it should be located. There is a space for defining
a label for the new chapter that will appear in the search path. Be sure to
choose position 1. S-Plus will create several subdirectories that it needs,
including a .Data directory where data are stored. The new chapter is
completely empty.
We want to create some data in our new chapter so that when we switch
between chapters we can verify that they actually perform the way we want
them to and keep our projects separate.
Creating Data
Click on Data, then on Random Numbers...
For Data Set:, specify test
For Sample Size:, specify 50
Choose OK
The first entry should appear in the Object Explorer.
Click on Data, then on Transform
For Data Set:, specify test (chosen by default)
For Target Column:, specify Square
For Variable:, specify Sample
For Function:, specify x^2
Click on Add
Sample^2 appears in Expression:
Choose OK
You can see that the test data sheet has been opened and contains the
variables Sample (the original Gaussian distributed data) and Square
(the square of the variable Sample).
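The same data can also be created from the Commands window; this is a sketch using the standard rnorm and data.frame functions, with the names test, Sample, and Square chosen to match the GUI steps above:

```
> test <- data.frame(Sample = rnorm(50))   # 50 standard normal values
> test$Square <- test$Sample^2             # add the squared column
```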
The new data set appears in the right pane of the Object Explorer. We
can switch back to the default chapter by choosing File-Chapters-New
Working Chapter... and choosing the default user directory. New refers
to the fact that you are defining which of the existing chapters to use, not
that you are creating a new one. Does the newly created data set (test)
appear in the Object Explorer of the default chapter? Switch back to the
new chapter. Is test the only data set that appears in the Object Explorer
here?
Note The New Working Chapter option automatically detaches the
old chapter as it attaches the new one. However, you do not have to detach
the old chapter when accessing a new one. You could simply attach the
new chapter and assign it to position 1 (File-Chapters-Attach/Create),
moving the old chapter further down the search path. This action has two
implications. Data sets from a nonactive chapter cannot be seen through
the Object Explorer but can be accessed. If a data set in the active chapter
has the same name as one in a nonactive chapter, the one in the active
chapter will be accessed.
In summary, the chapter feature is a convenient way to help keep yourself
organized. We highly recommend that you begin using it right from the
start. For practice with chapters, try creating one for each chapter of
the book and keeping chapter-specific data sets in the respective S-Plus
chapter.
2.6 Help
At some point, you will want to use (or have to use) the help system in
S-Plus, and we have reached a point where it is possible to explain this
simple procedure. Suppose we want to export the Puromycin data but don't
know how to start. Simply invoke the help system to find out how.
Exporting Data
Click on Puromycin in the left-hand pane of the Object Explorer
Click on the File main menu
Choose Export Data-To File . . .
Use the Browse button to specify a location for the file, if desired
Specify the file type using the Files of type: pull-down menu
We chose SAS Version 7/8/9 - Windows (sas7bdat; sd7) and
suggest that you do as well (you don't need to have SAS installed on
your computer).
Puromycin is already highlighted in From Data Frame: (see
Figure 2.7).
You can now start S-Plus by double-clicking on the new icon, which
will have the effect of working in the directory specified in the path name.
Open the Object Explorer and verify that the new S-Plus icon is using
the database (empty) defined by the new path name (we did this earlier).
Importing Data
Choose File from the main menu
Choose Import Data
Choose From File . . .
Use the Browse button to locate the Puromycin data set (in SAS
format) that we created earlier and highlight it by clicking on it (and
choose OK if necessary)
As a reminder, you should always check the data to make sure that there
are no unpleasant surprises.
Notice that the puro data set is currently sorted by state and then by
conc. It would make sense to have the data sorted by vel as well. The
following steps can be used to sort a data set.
Sorting Data
Click on the Data sheet to be sorted
See Figure 2.11 for the resulting sorted data set. Note that the first 2
rows both have state=treated and conc=0.02 but that now the first row
has a smaller vel (=47.00) than the second row (vel=76.00). In fact, the
original order of the rows appears in a column just to the left of the data,
reflecting the fact that these two rows have switched positions.
S-Plus refers to this data set as a data frame. Data frame is the
term used by S-Plus to denote a rectangular collection of data, possibly
including variables with different data types (e.g., character and integer).
Many data sets typically encountered are actually data frames.
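A data frame can also be built directly at the command line. The following is a small hypothetical example of our own (the names echo the puro data) mixing a character and two numeric variables:

```
> d <- data.frame(state = c("treated", "untreated"),  # character variable
+                 conc = c(0.02, 0.02),               # numeric variable
+                 vel = c(76, 47))                    # numeric variable
```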
2.11 Graphs
The problem with data summaries is that you don't actually see the
data itself. For a more visual summary of our data, we need to construct a
graph. There are actually several different ways of constructing a graph in
S-Plus using the GUI (menu buttons), but perhaps the easiest and most
flexible method is to use the palettes. The palettes exist only with S-Plus
for Windows, but the UNIX/Linux user can do the same things using the
main menu options.
Although the default graph looks quite good, we'll wait until after we
explain some editing techniques before actually showing it. UNIX/Linux
users don't have the convenience of editing the graph after the fact. Instead,
the details of the appearance of the graph have to be specied upfront.
This leads us to the topic of how to edit a graph once it has been created.
In particular, we are going to change the color and symbol for the points
on the scatterplot so that they will be more visible.
We have now changed the label and font size for the x-axis and leave it
up to you to repeat the procedure for the y-axis.
Adding a title to a graph is most easily done using Insert from the main
menu. The exact usage is explained below.
Adding a Title
Click on Insert in the main menu
Click on Titles
Choose Main...
Type in the desired title (Use Scatterplot)
Click outside the title box to end
Change the font as desired (see Changing the Font)
Change the box as desired
refers to the number of the entry in the data sheet. Choose some points
and look at their values and indices.
There are two very useful buttons on the graph sheet toolbar that open
the Graph Tools palette and the Annotation palette. By holding the cursor
over the individual buttons in the palettes, you can see what their functions
are. Explore the possibilities open to you with these and other buttons. We
created the graph in Figure 2.18 with just a few clicks of the mouse.
Now that the graph looks exactly the way we want, we're ready to print
it and save it for possible future use.
Printing a Graph
Click anywhere on the graph sheet to make it the active window
Choose File from the main menu
Choose Print Graph Sheet. . .
Make sure the settings for the printer are correct.
Choose OK
The only difference in the final graphs between the two approaches is
that the same symbol is used in all panels with the drag-and-drop method,
whereas each panel gets a different color/symbol using the menu approach.
The resulting message that we can draw from the Trellis graph is
that the relationship between concentration and velocity is the same for
state=treated as it is for state=untreated.
You will recall that we did not specify that any of the residual plots
should be created, but we do not have to start from the beginning. Notice
that the object my.reg now appears in the right-hand side of the Object
Explorer. Right-click on my.reg and choose Plot.... The original menu for
the residual plots appears, and we can now specify which plots we want to
explore. Choose Residuals vs Fit, Response vs Fit, Residuals Normal QQ,
and Cook's Distance as a start by ticking the appropriate boxes and then
choosing OK. A graph sheet is created with four pages, one for each plot
(not shown).
Note It should be mentioned at this point that graphs produced by statistical dialogs can be either editable or noneditable. The noneditable versions
have the advantages of being quicker to produce and requiring less space for
storage, but the disadvantages are that they cannot be edited to change
labels, add annotations, and so forth. Since most computers have ample
computing speed and space, we recommend the use of editable graphs. The
desired type of graph can be defined using Options - Graphs Options...
and ticking (or unticking) the desired default. The option Create Editable Graphics appears twice in the window and both should be ticked
or unticked (as desired). The next section will not be possible to perform
unless the residual plots were created as editable graphs (i.e., change the
option and redo the graphs above).
If we look at page 1 of the graph sheet, however, notice that the plot
of residuals (y-axis) versus fitted values (x-axis) does not look like a cloud
of points randomly distributed about the line y = 0. Furthermore, three
points have been indicated as suspect: points 1, 10, and 23. A smoothed
line has been added to make it all that much more apparent that there
is a pattern to the points. Page 2 of the graph sheet shows the response
variable (vel) versus the fitted values (x-axis) with an overlaid regression
line (dotted) and a smoothed line (solid). Notice that the points, from left
to right, are mostly below the line, mostly above the line, and then mostly
below the line again. This trend indicates a poor fit of the model. Page
3 shows the residuals (y-axis) versus the quantiles of the standard normal
(x-axis), which also indicates the points 1, 10, and 23. The fourth page
contains Cook's distance (y-axis) versus an index (count of the order of
the points) on the x-axis. Here, we see that points 1, 13, and 23 have the
largest values of Cook's distance.
The graphs suggest that the simple linear regression provides a poor fit
to the data, but you will perform a better analysis in the exercises.
This is a good place to mention that we can put multiple plots on a single
graph page by copying and pasting. This feature is particularly relevant and
helpful when trying to deal with a multipage graph sheet.
Choose scatter.sgr
Choose Add Graph. . . again
Choose residuals.sgr
Could add additional graphs using the same process
Click on Next
Choose Finish
PowerPoint is opened and creates the slides.
When complete, exit the wizard
It asks whether to save the lists of graphs; you can, but choose No
PowerPoint is open and is the active window
Choose File from the main menu (in PowerPoint)
Choose Save As. . .
Specify the file name and location for the PowerPoint file
If several graph sheets have been saved, they can be put into the same
PowerPoint presentation or one per presentation if you prefer. If a graph
sheet containing more than one page has been saved, PowerPoint will create
one slide per page.
Running the Link Wizard creates a link so that the Excel data can be
accessed in future S-Plus sessions. Be sure to tick the box Allow S-Plus
to store link information in Excel. Selecting the Excel data name and
then clicking on the Update Link button will allow S-Plus to update its
version of the data to reflect any modifications that were made in Excel.
Other S-Plus functions can now be run directly on the Excel data.
Statistics, Data Summaries, and then Summary Statistics.... However, you could also ask for the mean to be calculated directly by specifying
mean(x) in either the Commands window (executed with the Enter key)
or in the Script window (executed with the Run button).
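For instance, with a small numeric vector x of our own making:

```
> x <- c(1.5, 2, 2.5)
> mean(x)
2
```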
You may have noticed when you opened the Script window that three
buttons were added to the toolbar, as in Figure 2.28. The left button has a
right arrow for running scripts, the middle button is to stop the processing
of the commands in the Script window, and the right button is for finding
specified text.
provide an overview of how to use the GUI under UNIX/Linux, and the
similarities and differences will become apparent.
Help System
The help system on UNIX/Linux is slightly different than on Windows in
that there is really only one menu choice. Click on Help from the main
menu and the three choices are Contents, Index, and Search. It doesn't
matter which of the three choices is made as they are simply the three tabs
of the window that is opened. The Index tab is used to look at functions
and built-in S-Plus data sets, the Contents tab gives an overview, and
the Search tab can be used to search for information on any topic.
Chapters
Chapters must be specified manually. This means that if you are working
on two different projects and want to create two chapters to keep them both
separate, you have to create the chapters manually outside of S-Plus. Fortunately, the procedure is quite easy. Simply move to the desired directory
and issue the UNIX/Linux command Splus CHAPTER. This will have
the effect of creating a .Data subdirectory in which S-Plus will store data
from that chapter. To access data from that chapter, simply invoke S-Plus
from within that directory with the command Splus -g &. When S-Plus
is started, it opens with only the Commands window open and should look
like Figure 2.29.
Choose OK
The Data viewer is opened (similar to Data sheets in Windows)
(see Figure 2.32)
Graphs
The biggest change to keep in mind when creating graphs with the
UNIX/Linux GUI is that editing graphs works a little differently. Once
you have clicked OK after specifying a graph, you won't be able to edit it.
The way around this is to click on Apply instead. That way, the dialog box
with all the tabs for specifying everything about the graph is still open and
able to accept modifications. The only disadvantage is that every time Apply is clicked, a new graph window is opened. Other than that, the GUI
operates in much the same way as with Windows, and a demonstration
appears below.
Choose Graph from the main menu
Choose Scatter Plot
Define Puromycin as the Data Set
Define the x Axis Value and the y Axis Value from the pull-down
menu (x=conc, y=vel)
Conditioning on a variable will create a Trellis scatterplot as opposed
to a conventional one. Choose state as the conditioning variable (see
Figure 2.33)
Click on the Plot tab and notice that the available options are similar
to those in Windows
Change Color to black with the pull-down menu
Change the Symbol Style to Circle, Solid with the pull-down menu
(see Figure 2.34)
Tabs also exist for Fit, Titles, Axes, and Multipanel
Choose OK
The final Trellis graph appears in Figure 2.35.
Summary Statistics
Summary statistics are calculated in the same way with the UNIX/Linux
GUI as with the Windows GUI, but an example of the choices and output
is shown for reference.
Choose Statistics from the main menu
Choose OK
A Report window is opened with the results as in Figure 2.37
Linear Regression
An example of how to perform linear regression with the UNIX/Linux GUI
is shown below, although it works the same way as with the Windows GUI.
Choose Statistics from the main menu
Choose Regression
Choose Linear. . .
Choose vel as the Dependent Variable from the pull-down menu
Choose conc as the Independent Variable from the pull-down
menu (see Figure 2.38)
2.18 Summary
The GUI you just looked at provides a convenient, easy-to-use approach to
unlocking the possibilities of S-Plus for the novice and expert alike. The
whole power of S-Plus is in a way hidden behind the simple clicks required
to use the GUI.
The remainder of the book will primarily focus on the S-Plus language.
You will complete several sessions of analyzing data and encounter a variety
of statistical routines along the way. You will see the commands involved
in running some of the procedures that you have already learned in this
chapter.
We strongly recommend that you do not stop here after seeing the easy
point-and-click approach to S-Plus. By continuing with the remaining
chapters, you will not only understand what S-Plus is doing and how it
works, but you will learn some things that you will really want to be able
to do that you simply cannot do with just the GUI.
2.19 Exercises
Exercise 2.1
Perform a more complete analysis of the Puromycin data, including such
exploration as transformations of the predictor variable (square root and
log are typical), the effect of state, and the interaction of state and the
transformed variable. Be sure to use only the GUI to do your analyses and
visually check the fit of the model.
Exercise 2.2
Using the search path, locate the S-Plus built-in Geyser data set. Use
the help system to find out something about the data. Create a scatterplot.
(Hint: This data set is not a data frame, and you will have to be creative
to coerce it into one before you can create the plot.) What do you notice
about the points? Are they randomly distributed, linear, or grouped? Use
the Annotation palette and Graph Tools palette to group the points. Label
the groups of points. Calculate the means of each group (use the Grouping
variable option in the Summary Statistics menu). Write the means of
each group onto the scatterplot. Add a linear regression line to the graph.
Save the graph sheet and then make it into a PowerPoint presentation.
2.20 Solutions
Solution to Exercise 2.1
We know from our scatterplot that the relationship between the variables is
not particularly linear and perhaps looks quadratic. We will try two transformations of the data and create scatterplots to make a first check of which
model might fit best. We will try the two transformations square root and
the natural logarithm (ln, although the function in S-Plus is log), but
first we need to add these variables to our data frame.
Repeat the procedure to create the variable logconc using the log
function.
We want to replot the data using our transformed variables to look for a
linear relationship. This time, however, we will add a smoothed line (curve)
to the data that will help us visualize any trends that might exist.
Notice that the general shape of the data and the smoothed line are much
more linear than with the original data but are still curved. Repeat the
process above using logconc, and you will find that the log transformation
of the data is the one that produces what appears to be a linear relationship. Anyone who has worked with concentration data could probably have
guessed this right from the start.
Now that we are satisfied with a transformation on conc, we can go back
to the task of actually performing a better regression analysis.
Choose Add
Choose OK
Click on log(conc) in the Variable section of the Formula dialog
Add Main Effect: (+)
Choose OK for Formula
On the Results tab, choose ANOVA table
On the Plot tab, click residual plots of interest
Choose OK for Linear Regression
The F-statistic is still highly significant (F = 146.9 on 1, 21 df, p = 6.0 ×
10^-11) (see Figure 2.44), but the graphs look much better this time (see
Figure 2.45).
open and we can modify the formula rather than restarting our definition
of it.
Reenter the commands from above to set up the model with log(conc)
as the predictor variable. Add state as a second predictor variable (as a
Main Effect), see Figure 2.46, and choose OK for the formula.
This time, choose Apply for the regression. A new graph sheet is opened,
but the results are printed in the same Report window. The graphs still
look good, and we notice that state is significant on its own as a main
eect. The output from this regression model appears in Figure 2.47, but
the residual plots have not been shown.
The last point we will cover is the interaction between log(conc) and
state. We will leave that to the reader to perform (i.e., the solution stops
here, but you can refer to Figures 2.48 and 2.49 to see if you're on track). We
will simply note that the interaction term is significant, the interpretation
of which is also left up to the reader (do scatterplots by state).
Click on the + to the left of the icon for data and all the built-in
S-Plus data sets will be listed
Scroll down to nd the Geyser data set, right-click on its icon, and
choose Copy
At the very top of the Object Explorer, right-click on the Data icon
and choose Paste
You have now copied the Geyser data set to your working data
folder.
Using the Object Explorer, you can see that it has two variables: waiting and duration (see Figure 2.50). To find out what these two variables
contain, invoke the help system.
The help entry states that these are the successive waiting times and
durations of eruptions for the Old Faithful Geyser in Yellowstone National
Park.
With scatterplots, it is necessary to choose one variable for the x-axis and
one for the y-axis. For this kind of exploratory analysis where no hypothesis
is being examined, it doesn't really matter which variable goes where, but
we chose waiting time for the horizontal axis and duration for the vertical.
The problem is that the plotting routines are expecting that the data
are a data frame, but Geyser is saved as a list. Notice that in the Object
Explorer, the icon is different from that of puro. To see the data type of a
data object, right-click on its icon and choose Properties. Under Class:,
you can see that Geyser has a type (or class) list. If you double-click on the
Geyser icon, it will not open a data sheet and display the data; a Report
window is opened instead. Getting around the problem is not direct but
can be done as follows.
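One route is at the command line; this is a sketch where geyser.df is a name of our own choosing, and it assumes the two list components waiting and duration have equal length (they do for Geyser):

```
> geyser.df <- data.frame(waiting = Geyser$waiting,
+                         duration = Geyser$duration)
```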
Now that the Geyser data have been saved as a data frame, we can
create our scatterplot as before.
The axis labels are not ideal, and the plot symbol is a bit hard to see on
the printed page, so we will change these features on our graph.
Choose OK
Double-click on the line and change the Style, Color, and Weight,
as in Figure 2.54, and then choose OK
Use the Rectangle Tool to draw a box around the points on either
side of the reference (vertical) line. What do you notice? Click on
Select Tool to turn off Rectangle Tool.
Use the Comment Tool to label the rectangular boxes just drawn.
Click on Select Tool and then change the font, text, and scale as
desired. Experiment with the various fonts (choosing Apply, rather
than OK, simplifies the experimentation process). Choose OK when
you are satisfied with the results.
Use the Arrow Tool to connect a label to its corresponding box
and click on Select Tool to end. Change the color of the arrow (and
arrowhead) to black and choose OK.
Save the graph sheet
If you drew and labeled in a manner similar to us, you should now have
a graph that looks something like the one we obtained in Figure 2.55.
The take-home message here is that if the waiting time is short (< 60
minutes), then the eruption should be fairly long (4 minutes or more). If,
however, the waiting time is longer (> 60 minutes), there is no saying how
long the duration might be. We leave it to the reader to write the group
means on the plot as well as a linear regression line. Saving the graph sheet
into a PowerPoint presentation was covered earlier.
3
A First Session
The remainder of the book is devoted to the use of S-Plus via the command line. We will look at how to use S-Plus as a calculator for simple
and complex calculations. We will look at different data structures and how
to handle data flexibly. We will generate graphs displaying data and mathematical functions, explore data sets in detail, and use statistical models.
Finally, we will import data in several ways, automate tasks by writing
functions, and investigate many details that can help a lot in practice.
Our rst session begins with some general information on using S-Plus,
including how to start and quit and how to access the help system. This is
followed by some useful commands and how to do basic arithmetic. Logical
variables are covered, and the chapter ends with a review of the material
from the chapter, followed by hands-on exercises.
To greatly enhance your learning of S-Plus, you should be seated at
your computer while completing the exercises and also while reading each
chapter. The exercises offer a good opportunity to try out the commands
yourself and get acquainted with the system.
3.1.1
The very rst thing you need to know is how to start S-Plus. If you have
S-Plus on Windows, all you have to do is double-click on the S-Plus icon
(or use the taskbar) and a new S-Plus window will be opened that should
look approximately like the one in Figure 2.1 (p. 10).
If you're running S-Plus under UNIX, you need to create an S-Plus
chapter. A chapter is basically a directory tree containing all objects you
are going to create: data, functions, and more. S-Plus should always be
started from a chapter. Log on to your UNIX system and create an S-Plus
directory,
mkdir S
move to the new directory,
cd S
initialize it as a new S-Plus chapter,
Splus CHAPTER
and start up S-Plus from there.
Splus
Alternatively, enter
Splus -g
to start S-Plus with a graphical user interface.
You don't have to repeat the creation and initialization of the S-Plus
chapter again. Just change to the directory and start up S-Plus next time.
Note You can start S-Plus without defining chapters and simply work in
the default chapter. The effect is that you will work in the same chapter for
all of your projects, leading to a confusion of the objects that you created
for different projects. We highly recommend having separate chapters for
each of the projects you work on.
The procedure for defining chapters is described above for UNIX and
described on page 23 for Windows.
In the Commands window, the S-Plus command prompt
>
(the greater than sign) appears and indicates that S-Plus is ready to
accept your commands.
Now that you have started S-Plus, you will, at some point, want to quit.
You can quit S-Plus for Windows by clicking on the X in the upper right-hand corner of the window, double-clicking on the upper left-hand corner
of the window, or clicking once on File and once on Exit. The standard
Windows combination Alt-F4 works as well. Another alternative is to type
> q()
at the command prompt. The latter works under S-Plus for UNIX as well.
A quicker exit is to use Ctrl-D (to quit the S-Plus shell).
3.1.2
If you get stuck, don't panic. S-Plus has an online help system. To start
the help system, simply type help() at the S-Plus command prompt (or
help.start() under UNIX). The help system is designed such that you
do not have to know exactly what you're looking for when you access it.
It has a table of contents with commands categorized by function and
a search system that operates by typing in the command or subject on
which you want information. Based on the subject, rather than the specic
command, you can get information on your topic of interest. If you prefer
to use the mouse, the help system can also be started by clicking on Help
in the main menu or on the help button on the main toolbar. Depending
on the version of S-Plus being used, the help system might be set up
differently. If you know the name of a specific S-Plus command, entering
help(commandname), or alternatively ?commandname, will open a new help
window with instructions on the functionality of commandname. If you are
using Windows, you have the option of starting the help system by double-clicking on the Help icon in the S-Plus folder. This enables one to use the
help system without actually starting the entire system.
The help(commandname) form of accessing the help system is especially
useful when you know the name of a command but have forgotten the
specific options it has, the input it requires, or the output it produces. The
general help() command is useful if you don't know an actual S-Plus
command name but only a particular subject.
3.1.3 Before Beginning
parentheses are omitted, the contents of the variable (if data) or the definition of the command will be printed. Try typing help with no parentheses,
and the internal commands associated with it will be printed on the screen.
It will sometimes be necessary to add a comment to either an S-Plus
command or the output from one. The comment will be denoted with the
number sign (#).
Note This convention of writing notes is to catch your eye and draw your
attention to something that is important: a helpful hint, or an exception
to something just mentioned. The symbol is used to mark the end of a
Note.
3.2.1 Arithmetic Operators
Note Now that you understand the appearance of the counter in the
S-Plus output, it will be omitted from (most of) the rest of the examples
in the book for ease of presentation.
Practicing a little more with arithmetic operators, you see that you must
pay attention to the order of operators. Thus, the example
> 7+2*3
13
yields an answer of 13 because multiplication is evaluated before addition.
If you really wanted the 2 to be added to the 7 before multiplying by 3,
parentheses must be included to demarcate which will be evaluated first.
Thus, in the example
> (7+2)*3
27
the parentheses separate the operation that has to be evaluated before
multiplying by 3. We present additional simple examples below to illustrate
the rules above.
> 12/2+4
10
> 12/(2+4)
2
> 3^2
9
> 2*3^2
18
These examples already show how S-Plus works; that is, commands are
entered and the response follows immediately. You do not have to write a
program before you start (but you may).
3.2.2 Assignments
> rm(x)    # remove x
Note All assigned variables are written to disk and stored in the current
working directory. This means that you can quit S-Plus at any time without having to remember to save your data, as it is done automatically.
When you restart S-Plus, all data from previous sessions are still intact
and ready to be used. How about trying it out now?
The previous note brings up the point that, after a while, you may begin
to forget the variable names you have already used. In this case, it is very
easy to use the objects function to get a list of all the variable names
currently being used in the data directory. If you used the command at
this point, you would find the following:
> x <- 3
> objects()
"x"
The function has a few optional parameters whereby you can search a
different database (and look at all the S-Plus functions, for example) or
search for only a subset of variable names. It would be good practice with
the help system for you to find out about these options on your own.
3.2.3
Up to this point, you have worked only with single numbers (called scalars).
Although it is possible to work in this fashion, it is often advantageous
to work with vectors, which are simply ordered collections of numbers.
Suppose you have the numbers 1.5, 2, and 2.5, and you want to work with
the square of each value. You could save each number into a variable and
square each variable, or you could put all three numbers into a vector and
square the vector. The latter is much quicker if you have an easy way to
combine the numbers into a vector.
The concatenate command (c) can be used to create vectors of any
length. You simply have to list the elements to be combined, separating
each element by a comma. To perform the simple task above, we create x
and square it.
> x <- c(1.5, 2, 2.5)
> x
1.5 2.0 2.5
> x^2
2.25 4.00 6.25
The c command can be used with variables. If we forgot a number, it could
be added easily.
> x <- c(x, 3)
> x
1.5 2.0 2.5 3.0
Note The c command can be used with any type of data, like the character
values below.
3.2.4
The best shorthand for the seq function is to use the colon (:). Simply
putting the lower value, a colon, and the upper value is equivalent to the
seq function with an increment of 1.
> b <- 1:10
> b
1 2 3 4 5 6 7 8 9 10
When using the colon operator, the lack of an increment is not a problem.
> 0:10/10
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
Compare the output obtained here to that from the third example above.
Sequences can also be constructed in descending order as seen in the
following two examples.
> seq(5, 1, -1)
5 4 3 2 1
> 5:1
5 4 3 2 1
The seq function has an option that is sometimes useful. If you know how
long the sequence should be but don't know the exact upper value, use the
length option as below.
> seq(0, by=0.03, length=15)
Thus, this sequence starts at 0 and goes up in increments of 0.03 until
the vector contains 15 elements. The length option can be used to obtain
equally spaced intervals. Consider our example above, but suppose that we
want 15 values to be equally spaced between 0 and 1 (creating 16 intervals).
What would you do?
3.2.5
the pattern, and then each element is replicated the corresponding number
of times.
> rep(1:3, 1:3)
1 2 2 3 3 3
Another interesting example, left to the reader to try and interpret, can be
constructed by invoking the rep function twice.
> rep(1:3, rep(4, 3))
Note Sometimes, the length of the desired vector is known rather than the
number of times a pattern should be replicated. In this case, the length
option can be used in place of the number of times option.
Suppose we want a vector with 10 elements using the pattern 1, 3, 2. To
do this, we would use the length option as in the following example.
> rep(c(1, 3, 2), length=10)
1 3 2 1 3 2 1 3 2 1
a <- 7
b <- 3
ab <- a*b
ab
21
Is the result what you expected? (What did you expect?) In any case,
Table 3.1 gives an example of using arithmetic operators on vectors.
Table 3.1. Arithmetic operations on vectors.

 a   b   a+b   a-b   a*b   a/b    a**b
 0   1     1    -1     0   0.00       0
 5   2     7     3    10   2.50      25
10   3    13     7    30   3.33    1000
15   4    19    11    60   3.75   50625
approach the problem above, we need to build vectors x and y so that they
contain our values of interest.
> x <- seq(2, 10, 2)
> y <- 1:5
> x
2 4 6 8 10
> y
1 2 3 4 5
We now create a vector z that evaluates the previous function f for each
pair of elements in the vectors x and y. Thus, the vector z which we
create will have the same number of elements as both x and y.
> z <- ((3*x^2+2*y)/((x+y)*(x-y)))^(0.5)
> z
2.160247 2.081666 2.054805 2.041241 2.033060
Bracket   Function
( )       For function calls, as in f(x), and to set priorities,
          as in 3*(4+2)
[ ]       Index brackets, as in x[3]
{ }       Block delimiter for grouping sequences of commands.
          Example: if { ... } else { ... }
> x[1]    # first element of x
0
> x[2]    # second element of x
10
> x[3]    # third element of x
20
> x[4]    # doesn't exist
NA        # NA indicates Not Available
The index reference that is enclosed within the square brackets can be a
vector itself, indicating that more than one value is desired.
> x[1:2]
0 10
> x[c(1, 3)]
0 20
This type of notation may also be used to exclude values. Suppose we want
all values except the first. This is done in the same way as above, except
that the minus sign is placed before the index.
> x[-1]
10 20
> y <- 1:2
> x[-y]          # use y as index
20
> x[2] <- 100
> x
0 100 20
Table 3.3. Symbols for logical operators.

Symbol   Function
<        Less than
>        Greater than
<=       Less than or equal to
>=       Greater than or equal to
==       Equal to
!=       Not equal to
> 3<4
T
> 3!=4
T
The two lines above show us that 3 is less than 4 and that 3 is not equal to
4. The exclamation point (!) is used to negate the equals sign, so != gives
us "is not equal to". Now, consider using logical variables with vectors.
> x <- 1:3
> x < 3
T T F
We see that 1 and 2 are less than 3, but that 3 itself is not.
Note Logical values are coded 1 for TRUE and 0 for FALSE. As with most
other languages, S-Plus stores logical variables as 1/0 values with an
attached flag indicating that this variable is of type logical.
With the flag and 1/0 coding, it is possible to perform both logical
comparisons and numeric computation.
> x <- -3:3
> x<2
T T T T T F F
> sum(x<2)
5
In the example above, we have used both a logical comparison (x<2) and
a numerical computation (sum(x<2)). (Note that the sum function simply
returns the sum of the values given to it.) For the logical comparison, we
see a T=TRUE for each value (-3, -2, -1, 0, and 1) that is less than 2 and
an F for each value (2 and 3) that is not less than 2. The sum function
operates on the 1/0 coding of the T/F values. After its internal translation,
the calculation really being performed is sum(1+1+1+1+1+0+0) = 5. We
have simply counted how many elements of x were less than 2.
> height[weight<130]
60 61
Note Be careful when creating a subset of a variable if a negative value
is involved. Suppose you want to look at all values of x that are less than
-1. The first guess would be to use
> x[x<-1]
but the result here would be that x now contains the single value of 1.
Unintentionally, the assignment operator (<-) has been used. There are
two solutions to this problem: put a space between the < and the -, or
enclose the -1 in parentheses, as in (-1).
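To make the two fixes concrete, here is a short sketch using a small example
vector of our own choosing:

> x <- c(-3, 0, 2)
> x[x < -1]        # a space between < and -
-3
> x[x<(-1)]        # parentheses around the -1
-3

Both forms force S-Plus to read the expression as a comparison rather than
an assignment.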
3.6 Review
This review is intended to cement the ideas presented so far. If you feel
the ideas up to this point are elementary and you would be bored with a
review, by all means skip the review and move right to the exercises. If
you have any doubt about which to do, then you would profit from going
through the review. It has been structured as a detailed example using and
explaining the ideas from this chapter.
We begin with some examples of using the seq and rep functions, both
of which are quite useful for creating indexing and indicator variables.
> seq(1, 6, 1)          # 1 to 6, increment of 1
1 2 3 4 5 6
> seq(1, 6, 2)          # 1 to 6, increment of 2
1 3 5
> rep(1, 6)             # repeat 1, 6 times
1 1 1 1 1 1
> rep(6, 1)             # repeat 6, 1 time
6
> rep(1:3, 2)           # repeat (1:3), twice
1 2 3 1 2 3
> rep(1:3, rep(2, 3))   # repeat 1 twice,...
1 1 2 2 3 3
Here are more examples of the principles from our first session using the
Geyser data set of S-Plus. The Geyser data set comes with S-Plus as
an example and contains data collected about the Old Faithful Geyser in
Yellowstone National Park. There are two variables: the waiting times (in
minutes) and the corresponding duration (also in minutes) of the eruption.
We can use the attach function to enable use of the data. We simply attach
a data set and then can directly access the components contained in it.
> attach(geyser)
We are now ready to do some work and start by creating a vector with the
first 10 waiting times.
> wait.first10 <- waiting[1:10]
> wait.first10
80 71 57 80 75 77 60 86 77 56
We follow that by creating another vector with the next five waiting times.
> wait.next5 <- waiting[11:15]
> wait.next5
81 50 89 54 90
Concatenate the vector with the first 10 waiting times with the vector
containing the next 5 waiting times to create a new vector with the first 15
waiting times. It might not be the most exciting example, but it is a nice
example of vector concatenation.
> wait.first15 <- c(wait.first10, wait.next5)
> wait.first15
80 71 57 80 75 77 60 86 77 56 81 50 89 54 90
To get some practice with data editing, insert the value 100 at the beginning
of the wait.first10 vector.
> c(100, wait.first10)
100 80 71 57 80 75 77 60 86 77 56
Do the same type of thing by putting the value 100 between the vectors
wait.first10 and wait.next5.
> c(wait.first10, 100, wait.next5)
80 71 57 80 75 77 60 86 77 56 100 81 50 89 54 90
For a little practice with subsetting, we will create vectors containing the
waiting times less than 50 and the corresponding durations.
> short.wait <- waiting[waiting<50]
> short.wait.duration <- duration[waiting<50]
The length command is used to calculate how many elements a vector (or
other construct) contains.
> length(short.wait)
16
> length(short.wait.duration)
16
The mean function is used to calculate the mean, or average, of the elements supplied
to it. Here, we calculate the mean values of several of the vectors we just
created. Note that the mean duration of the short waiting times is larger
than the mean for all durations.
> mean(short.wait)
47.8125
> mean(waiting)
72.31438
> mean(short.wait.duration)
4.432292
> mean(duration)
3.460814
> short.wait.duration
4.600000 4.466667 4.333333 4.000000 4.600000
4.916667 4.583333 4.000000 4.000000 4.466667
4.450000 4.416667 4.983333 4.000000 4.333333
4.766667
Logical operators can be used to create subsets as we saw above, but
remember that a double equals sign (==) is used for the logical "equal to".
> short.wait.duration==4
F F F T F F F T T F F F F T F F
Replacing one value with another is something that you will often do.
Such replacements can easily be done in S-Plus by working on the left-hand side of the assignment statement (<-). All we have to do is set up the
subset of elements we want to change and use it as the index of the vector.
We replace all durations of 4 with 4.01 after creating a copy of the original
data.
> adj.short.wait.dur <- short.wait.duration
> adj.short.wait.dur[adj.short.wait.dur==4] <- 4.01
> adj.short.wait.dur
4.600000 4.466667 4.333333 4.010000 4.600000
4.916667 4.583333 4.010000 4.010000 4.466667
4.450000 4.416667 4.983333 4.010000 4.333333
4.766667
> detach(geyser)
When you're done with a data set, it is a good idea to detach it using the
detach function.
We have seen in this chapter how we can use just a few simple commands
and ideas to calculate some rather sophisticated results.
3.7 Exercises
Exercise 3.1
Calculate the first 50 powers of 2 (i.e., 2, 2^2, 2^3, and so on). Calculate
the squares of the integer numbers from 1 to 50. Which pairs are equal (i.e.,
which integer numbers fulfill the condition 2^n = n^2)? How many pairs are
there?
(Use S-Plus to solve all these questions!)
Exercise 3.2
Calculate the sine, cosine, and tangent for numbers ranging from 0 to 2*pi
(with distance 0.1 between them). The constant pi is available in S-Plus
and is denoted by pi.
Remember that tan(x) = sin(x)/cos(x). Now calculate the difference between tan(x) and sin(x)/cos(x) for the values above. Which values are
exactly equal? What is the maximum difference? What is the cause of
the differences?
Exercise 3.3
Use the S-Plus help routines (not the manuals) to find out how to use
the functions round, trunc, floor, and ceiling and learn what they do.
Predict what each of these functions will give as an answer for the numbers
-3.7 and +3.8. Use S-Plus to test your predictions.
3.8 Solutions
Solution to Exercise 3.1
We first create a vector (ints) containing the integers from 1 to 50.
> ints <- 1:50
We then create our vectors of interest by raising 2 to the power of our
integers and by raising our integers to the power 2.
> x <- 2^ints
> x
2 4 8 16 32 64 128 256 512 1024 2048 ...
> y <- ints^2
> y
1 4 9 16 25 36 49 64 81 100 121 ...
We create a T/F vector (being the same length as both x and y), which
contains a T when elements of x and y are equal and an F when they are
not equal.
> equal <- (x==y)
> equal
F T F T F ... F
Now we print the values that are common in both x and y.
> x[equal]
4 16
> ints[equal]
2 4
We can see which entries were the ones in common. (The second and the
fourth entries of x were equal to the second and fourth entries of y.)
The length function is used to determine how many elements are
contained in a vector.
> length(x[equal])
2
We can do the same calculation using the sum function on the logical vector
equal.
> sum(equal)
2
The command below tells you which of the elements are truly equal to zero.
Notice that many of them are not zero, despite the fact that theoretically
they should be.
> diff==0
T F F F T T F F T F F F F F T F T T T T T
T F T T T F T T T T T T T T F T T T F T T
T F F T T F T T T T T T T T F T T F T T T
To try to investigate this strange result, we look at the following: the total
number of values we have, the number of values that are exactly equal to
zero, and the largest of the differences (the absolute value of it).
> length(diff)
63
> sum(diff==0)
43
> max(abs(diff))
1.421085e-014
There were a total of 20 of the 63 elements that were not zero, but the
largest was only about 1.4 x 10^-14. For most practical calculations, an error
of this magnitude is negligible. However, you should be aware of it in case
such precision is important. The magnitude of such errors may vary from
machine to machine.
Function     x = -3.7   x = 3.8
round(x)           -4         4
trunc(x)           -3         3
floor(x)           -4         3
ceiling(x)         -3         4
The floor and ceiling functions are a little confusing if you have negative numbers because the next lower and next higher integers are the
reverse of what you might expect. The next lower integer from -3.7 is -4,
and the next higher integer from -3.7 is -3.
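You can check this behavior directly at the prompt:

> floor(-3.7)      # next lower integer
-4
> ceiling(-3.7)    # next higher integer
-3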
4 A Second Session
Some of the most basic functions available in S-Plus were presented in the
previous chapter. It is time to move on to more advanced data structures
that will allow easier completion of necessary tasks. We begin with matrices
and then branch out into more specialized structures such as data frames
and lists. Subsetting by index value and missing values are introduced, and
we close with a few new applications and a review of the material covered
in the chapter.
We first consider the matrix type, as this is one of the most important
types in mathematics and statistics and has the same rectangular layout
as data frames but only one type of data, either numeric or character.
4.1.1 Matrices
Try to guess what the following commands produce and then try them out.
> x <- 3:8
> matrix(x, 3, 2)
> matrix(x, ncol=2)
> matrix(x, ncol=3)
> matrix(x, ncol=3, byrow=T)
To further help you with matrices and their uses, let's build a more
instructive example matrix. Suppose we have data on the gross domestic
product (GDP, in billions of US$), population (in millions), and inflation
(consumer price inflation from 2000) for several European countries.1
> data <- c(208, 8, 2.4, 1432, 61, 1.7, 2112, 82, 2.0)
> country.data <- matrix(data, nrow=3, byrow=T)
> country.data
 208  8 2.4
1432 61 1.7
2112 82 2.0
Our matrix is not terribly useful to us in this state because we don't know
which column corresponds to which variable or which row pertains to which
country. To make our lives easier, we can give names to the rows and/or
columns by using the dimnames function. This is a useful way to remember
what is stored in which fields. Dimnames are used to obtain a better display
and can also be used to access field entries.
Since a matrix has two dimensions, let's say n rows and p columns, the
two vectors containing the dimnames that need to be attached to the matrix
should have n and p elements, respectively. Each element is a character
string describing the corresponding contents.
Example 4.1. Attaching dimnames to a matrix
In the following code, we assign row and column headings, using dimnames,
to a matrix.
> dimnames(country.data)
NULL                          # NULL means empty
> dim(country.data)
3 3
> countries <- c("Austria", "France", "Germany")
> variables <- c("GDP", "Pop", "Inflation")
> dimnames(country.data) <- list(countries, NULL)
> country.data
Austria   208  8 2.4
France   1432 61 1.7
Germany  2112 82 2.0
1 The data are taken from The Economist Pocket World in Figures, 2002 Edition.
> A <- matrix(0:5, 2, 3)
> B <- matrix(seq(0, 10, 2), 2, 3)
In S-Plus, there is one big exception to the rule that matrix dimensions must match. If you have a single number (scalar), it can be added to,
subtracted from, multiplied by, or divided into a matrix of any dimension.
Take the matrix A above, and define a scalar, d, to be any number. Try
adding d+A. Is the result what you expected? Now, try A/d. Given what
you saw with the addition, did the division give you what you expected?
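As a sketch of such an experiment (assuming A is the 2-by-3 matrix
matrix(0:5, 2, 3), repeated here for completeness, and choosing d to be 10
for illustration):

> A <- matrix(0:5, 2, 3)   # assumed definition of A
> d <- 10
> d+A                      # d is added to every element of A
10 12 14
11 13 15

The scalar is recycled across all elements, so A/d works elementwise in the
same way.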
Matrix multiplication is more complicated and is not elementwise. The
S-Plus syntax to multiply two matrices is as follows:
> A %*% B
The trick to matrix multiplication is that the number of columns of the
first matrix has to be equal to the number of rows of the second. Hence, in
the example above, if matrix A has two rows and three columns, matrix B
must have three rows but can have any number of columns.
Note Matrix multiplication can be done elementwise as well. To do this,
we simply use the normal multiplication sign (*).
> A*B
 0  8 32
 2 18 50
For the resulting matrix, the first element of A is multiplied by the first
element of B, and so on.
In general, division is not possible with matrices. However, a common
matrix computation is to take the inverse, which is possible in S-Plus
using the solve function. More advanced work with matrices may be possible with the singular value decomposition. This function is appropriately
named svd in S-Plus.
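As a small sketch of solve, using an arbitrary 2-by-2 matrix of our own:

> M <- matrix(c(2, 1, 1, 3), 2, 2)
> solve(M)              # the inverse of M
 0.6 -0.2
-0.2  0.4
> solve(M) %*% M        # recovers the identity matrix

Multiplying the inverse by the original matrix gives the identity matrix
(up to rounding error).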
Recall that column vectors may be multiplied (or divided), even if they
both come from the same matrix. From our example with GDP, we can
divide GDP by the population in the following way:
> country.data[, "GDP"]/country.data[, "Pop"]
Austria   France Germany
     26 23.47541 25.7561
Mathematical operations on vectors work in an elementwise fashion, unlike
matrix multiplication.
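A quick illustration of the elementwise behavior:

> c(1, 2, 3) * c(4, 5, 6)
4 10 18

Each element of the first vector is multiplied by the corresponding element
of the second.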
Merging Matrices
Frequently, you may discover that you have forgotten a row or column (or
several) and would like to add it at a later stage. To add rows or columns
to matrices, simply create a new matrix containing the missing data and
merge it to the existing matrix. To add extra rows, use the rbind function,
and to add extra columns, use the cbind function.
Suppose we forgot the variable area (in 1000 square kilometers) for the
countries in our example. We could add this information to our matrix in
the following way:
> Area <- c(84, 544, 358)
> country.data <- cbind(country.data, Area)
> country.data
         GDP Pop Inflation Area
Austria  208   8       2.4   84
France  1432  61       1.7  544
Germany 2112  82       2.0  358
Notice that if you create a (column) vector, you can add it with no special manipulation. Also, if you give the vector the appropriate name, the
dimnames are automatically updated as desired.
You add a row in exactly the same manner as with column vectors, except
that you use the rbind function instead of cbind. To demonstrate how this
works, we will add a row of data for Switzerland.
> Switzerland <- c(259, 7, 1.6, 41)
> country.data <- rbind(country.data, Switzerland)
> country.data
             GDP Pop Inflation Area
Austria      208   8       2.4   84
France      1432  61       1.7  544
Germany     2112  82       2.0  358
Switzerland  259   7       1.6   41
You can add multiple rows/columns in the same manner, but pay careful
attention to the dimensions of both the existing matrix and the one to be
added.
4.1.2
Arrays
In S-Plus, an array is a data construct that can be thought of as a multidimensional matrix (up to eight dimensions). It is difficult to conceive of
anything beyond three dimensions, but arrays may sometimes be useful to
hold multiple two- or three-dimensional matrices. One nice feature of this
construct is that the rules and procedures introduced for matrices hold for
arrays as well.
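A three-dimensional array can be created with the array function; a minimal
sketch (the dimensions 3, 4, and 2 are chosen to match the discussion below):

> a <- array(1:24, c(3, 4, 2))    # 24 values in a 3 x 4 x 2 array
> dim(a)
3 4 2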
Notice that S-Plus prints one matrix for each level of the third dimension
(highest dimension) of the array. The three-dimensional array is shown in
two-dimensional slices. The array above of dimension 3 x 4 x 2 is shown in
two slices, each of dimension 3 x 4.
Row and Column Functions
Due to the matrix layout of data that one commonly encounters, it is
often required to perform certain functions on each row or on each column
of the data. The most commonly used functions on rows and/or columns
have been made into functions in S-Plus. The operations of mean, sum,
variance, and standard deviation can be calculated for each column of a
data matrix or for each row (not the standard deviation). The names of
the functions are self-explanatory and are denoted by colMeans, colSums,
colVars, colStdevs, rowMeans, rowSums, and rowVars. The simplest way
to see what the functions do is to look at a simple example. We will use
the country.data data set that we have already created and compute the
mean of each of the variables (columns).
> colMeans(country.data)
    GDP  Pop Inflation   Area
1002.75 39.5     1.925 256.75
We see that the mean population of our four countries is 39.5 million.
Care must be taken about the interpretation of the output from these
functions. If we had used the function rowMeans instead of colMeans, we
would have obtained results, but what is the mean of Austria? In this
example, it is the mean of the GDP, population, inflation, and area of
Austria and has little meaning. In other situations, the row functions are
appropriate.
In the next subsection, we will explore a more general way of using any
function and applying it to either the rows or columns.
Apply
The apply function offers a very elegant way of handling arrays and matrices. It works by successively applying the function of your choice to each
row (first dimension), each column (second dimension), or each level of a
higher dimension. The syntax is
> apply(data, dim, function, ...)
data is the name of your matrix or array, and function is the name of any
S-Plus function that you want to apply to your data. For a matrix of
two dimensions, the option dim can take the integer value of 1 or 2 to
refer to the rows or columns, respectively. The option, ..., can be filled in
with options to be passed on to the function being specied. We can use
such a function to compute the maximum value of each variable from our
European Countries example.
Example 4.3. Using apply on arrays
Here, we use the apply function to calculate the maximum of each column
in our country data set.
> apply(country.data, 2, max)
 GDP Pop Inflation Area
2112  82       2.4  544
The max function simply returns the maximum value of the values supplied
to it. Remember that any function can be given to apply so that you can
use it for many purposes.
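For instance, substituting min for max gives the column minimums of the
same matrix (a sketch, with the output formatted as in the example above):

> apply(country.data, 2, min)
GDP Pop Inflation Area
208   7       1.6   41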
Note For simple functions like min and max, direct functions exist in
S-Plus that can be used to replace apply. In the example above, the column maximums could have been calculated using the equivalent expression
below.
> colMaxs(country.data)
Other common functions of this nature are colMins, colMaxs, colRanges,
rowMins, rowMaxs, and rowRanges.
These specialized functions are typically faster than using the more general method involving the apply function. However, apply is much more
general as it can be used with any function including your own.
4.1.3 Data Frames
One of the most used data constructs in S-Plus is the data frame. Its
popularity stems from its flexibility, as it allows you to bind data vectors of
different types together, such that the data can be accessed like a matrix
(similar to data sets in SAS). The syntax for creating a data frame is as
follows:
> data.frame(data1, data2, ...)
Here, the ... notation simply shows the fact that you can specify as many
data sets as you want.
This functionality fits in nicely with the next step in our European
Countries example. You may have noticed a major difference between the
countries (other than language), which we will indicate with a new character variable: namely, all the countries belong to the European Union
(EU) except Switzerland. We could try to use the principles from the previous section on merging matrices, but the different data types cause the
following problem:
> EU <- c("EU", "EU", "EU", "non-EU")
> country.data1 <- cbind(country.data, EU)
> country.data1
            GDP    Pop  Inflation Area  EU
Austria     "208"  "8"  "2.4"     "84"  "EU"
France      "1432" "61" "1.7"     "544" "EU"
Germany     "2112" "82" "2.0"     "358" "EU"
Switzerland "259"  "7"  "1.6"     "41"  "non-EU"
> apply(country.data1, 2, max)
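The data frame construct avoids this coercion. A sketch of how a data frame
such as country.frame might be built from the pieces above (the book's own
construction falls outside the excerpt shown here):

> country.frame <- data.frame(country.data, EU)

With this form, the numeric columns keep their numeric type, and only EU
is stored as character data.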
> is.data.frame(barley)
T
> sapply(barley, class)
    yield   variety      year      site
"numeric" "ordered" "ordered" "ordered"
The individual variables in the example above are then referenced by
one of three different notations: country.frame$GDP, country.frame[,
"GDP"] (as with matrices), or country.frame["GDP"]. These forms of referencing are rather long and tedious, especially if your typing is not up to
par. However, if you know that you are going to be working with a particular data frame, there is some relief in the form of the attach command.
If you attach a data frame, you can work with the simple variable names
(without the country.frame$) until the data frame is detached with the
detach function.
Example 4.5. Attaching and detaching a data frame
When attaching a data frame, you must be aware of variables already
existing in your session that might conflict with the name of a variable
from an attached data frame. We will see this problem below.
> attach(country.frame)
> Pop
    Austria France Germany Switzerland
          8     61      82           7
> EU
"EU" "EU" "EU" "non-EU"
Note that the double quotes are included when this variable is printed.
This is the local version of the variable that was originally created to add
to the data frame. To see the data frame version, detach, remove the local
version, and attach again.
> detach()
> rm(EU)
> attach(country.frame)
> EU
EU EU EU non-EU
> detach()
> EU
Problem: Object "EU" not found
Use traceback() to see the call stack
Note The local version of a variable always takes precedence over a version
supplied by the attach function.
In the example above, the local version of the variable EU was printed
after the attach was made. We know this because double quotes were
printed, explicitly showing us that the original character vector was used.
After we deleted the local version and reattached, the double quotes no
longer appear. The data frame version of the variable EU doesn't contain
double quotes because the data frame itself recognizes that the variable is
of type character and dispenses with them.
The attach function is much like the set command in a SAS data step.
4.1.4 Lists
Lists are a construct that allow you to tie together related data that do not
share the same structure. Recall that with data frames, the data could be
of different types but had to have the same structure (i.e., have the same
length).
Consider again the European Countries example. Suppose we want to
record all the European countries in which German is an official or major
language and then associate this information with the rest of our European data. First, you need to know that there are six European countries
in which German is an official or major language: Austria, Belgium, Germany, Liechtenstein, Luxemburg, and Switzerland. We now continue the
European Countries example by showing how to create and work with lists.
Example 4.6. Creating lists
We start by creating a vector to add to our European countries data to
make a list.
> Ger.Lang <- c("Austria", "Belgium", "Germany",
"Liechtenstein", "Luxemburg", "Switzerland")
> country.list <- list(country.frame, Ger.Lang)
> country.list
[[1]]:
            GDP    Pop  Inflation Area   EU
Austria     " 208" " 8" "2.4"     " 84"  "    EU"
France      "1432" "61" "1.7"     "544"  "    EU"
Germany     "2112" "82" "2.0"     "358"  "    EU"
Switzerland " 259" " 7" "1.6"     " 41"  "non-EU"

[[2]]:
"Austria"       "Belgium"   "Germany"
"Liechtenstein" "Luxemburg" "Switzerland"
The principle of defining your own functions is the same as above and can
most easily be seen through a simple example of calculating the standard
deviation, which is the square root of the variance.
Example 4.7. Creating your own function
> SD <- function(x) { sqrt(var(x)) }
> y <- 1:10
> SD(y)
3.02765
Here, we see that the function has one parameter (x) that is used to
define the data for which we want to calculate the standard deviation. The
actual calculations of the function are contained within curly brackets and
can be extended over several lines with several commands. The result of the
function that should be returned to the user should be contained within the
return function. By default, the result of the calculation on the last line
of the function will be returned (made visible) to the user. We could have
defined the contents of the function as either return(sqrt(var(x))) or
y <- sqrt(var(x)); return(y). We will be covering functions in much
more detail in Section 9.2.
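As a hypothetical multi-line example (CV is a name of our own choosing,
computing the coefficient of variation, that is, the standard deviation
divided by the mean):

> CV <- function(x) {
    m <- mean(x)
    s <- sqrt(var(x))
    return(s/m)
  }
> CV(y)
0.5504819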
returned as the maximum of your vector would warn you that something
strange happened in your calculations. In other cases though (like ours),
you don't care about the missing value because you put it there to represent
unknown information. This problem can be circumvented by determining
which values are not missing and keeping only the valid values to perform
the desired mathematical function. We use the attach function here to
avoid some typing.
> rm(GDP)
> attach(country.frame)
> gdp.na <- is.na(GDP)
> gdp.na
    Austria France Germany Switzerland
          T      F       F           F
> max(GDP[!gdp.na])
2112
> max(GDP[-gdp.na])
2112
> mean(GDP[!is.na(GDP)])
1267.667
> mean(GDP, na.rm=T)
1267.667
> detach()
We have used the is.na function to determine which GDP values are missing (only the one for Austria). Then, using index subsetting to select certain
elements, we have excluded the missing values first with the exclamation
mark (used for negation) and then with the minus sign. (A more in-depth
look at missing values will be presented in Section 7.5.)
> geyser.matrix[min.wait, ]
waiting duration
43 4.333333
The conclusion is that the minimum waiting time (43 minutes) does not
correspond to the minimum duration (0.83 minutes). In fact, it looks like
the minimum waiting time might correspond to the longest duration.
To look at this strange relationship between waiting times and durations, you will construct two small matrices: m.short.wait containing the
shortest waiting times and corresponding durations, and m.long.wait containing the longest waiting times and corresponding durations. We create
a logical vector of waiting times less than 48.
> short.wait <- waiting<48
Let us now use the T/F vector to select specific rows from the matrix.
> m.short.wait <- geyser.matrix[short.wait, ]
> dim(m.short.wait)
4 2
> m.short.wait
waiting duration
43 4.333333
45 4.416667
47 4.983333
47 4.766667
> m.long.wait <- geyser.matrix[waiting>95, ]
> dim(m.long.wait)
4 2
> m.long.wait
waiting duration
108 1.950000
96 1.833333
98 1.816667
96 1.800000
You can place matrices on top of one another with the rbind function.
> join <- rbind(m.long.wait, m.short.wait)
> join
waiting duration
108 1.950000
96 1.833333
98 1.816667
96 1.800000
43 4.333333
45 4.416667
47 4.983333
47 4.766667
Notice that the items in the matrix join are not sorted. Values in a single
vector may be sorted directly using the sort function, as in
> sort(join[, 1])
43 45 47 47 96 96 98 108
However, if you want to sort on more than one variable, it is best to use
the order function to get a permutation of the indices that will give the
desired ordering when used. The following is a short example of this.
> order(join[, 1], join[, 2])
5 6 8 7 4 2 3 1
> join[order(join[, 1], join[, 2]), ]
waiting duration
43 4.333333
45 4.416667
47 4.766667
47 4.983333
96 1.800000
96 1.833333
98 1.816667
108 1.950000
We specified the waiting times first and then the durations to the order
function so that our new list would be sorted first by waiting time and
then, in case of a tie, by duration. The third and fourth entries both have a
waiting time of 47 minutes; the row with the shorter duration comes first
in the list. The result of order contains the order in which the rows should be
arranged, and subsetting the matrix join with it puts them in this order.
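If a descending (reverse) sort is wanted instead, the permutation returned by order can simply be reversed with the rev function. For example,

> join[rev(order(join[, 1])), ]

arranges the rows in decreasing order of waiting time.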
4.6 Exercises
Exercise 4.1
Work with the European Countries data used throughout most of this
chapter.
Compute the GDP per capita and add it to the data frame.
Compute the maximum of each of the variables in the data frame.
Which country has the largest GDP per capita?
Assume that the GDP is not known for France (set it to missing).
Recalculate the GDP per capita and the maximum value for each
variable.
What has now happened to the maximum of the GDP variable? Suppress the row containing the missing value and again recalculate the
maximum of each variable.
Look carefully at the maximum area. To which country does this
area belong? Is this really the largest country in our small sample?
What happened when we excluded the row with the missing GDP for
France? Determine how to correctly adjust for the missing GDP and
still get correct results for the maximum values. (Hint: Use the help
facility.)
Exercise 4.2
This exercise is really a continuation of the previous one. Once you have
added the variable of GDP per capita to the data frame from the European Countries example, sort the data frame according to GDP per capita.
Notice that it puts the smallest GDP per capita first and the largest at the
bottom. Use the help utility to change the sort so that the largest GDP
per capita is first and the smallest is at the bottom (reverse sort).
Exercise 4.3
Create a list of European countries whose official or major language is
French. Add this vector to the list from the European Countries example.
Use S-Plus to produce a list of countries where both German and French
are either an official or a major language.
To do this, you will need to create a list where either German or French is
spoken and find the countries that are duplicated in this list. (Hint 1: Use
the help utility.) What are the physical characteristics of these countries?
(Hint 2: The French-speaking countries in Europe are Belgium, France,
Luxemburg, and Switzerland.)
Exercise 4.4
Write a function with two parameters that offers a solution for the famous
Y2K problem. The first parameter will be a two-digit year and the second
will be a two-digit cutoff. The cutoff works such that if 30 is specified, then
all years below (or equal to) 30 are considered to be in the 21st century
(20xx) and years above 30 are considered to be in the 20th century (19xx).
You will want to look at the syntax of the ifelse function.
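The ifelse function takes a vector of conditions and selects, element by element, from its second argument where the condition is true and from its third argument where it is false. As a small illustration (with arbitrary values, not the solution to the exercise),

> ifelse(c(15, 30, 45) <= 30, "21st century", "20th century")

returns "21st century" for the first two elements and "20th century" for the last.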
Exercise 4.5
Continue the example on merging data shown in Section 4.4. In particular,
merge the phone.numbers and addresses data frames such that:
All entries in both data frames appear
Only entries in the phone.numbers data frame appear
Only entries in the addresses data frame appear
Not included in the solution is the extension that you should rename the
names variable in the addresses data frame to friends and then perform
a simple merge.
Exercise 4.6
For some extra practice with merging and data manipulation, use the
S-Plus datasets car.test.frame and fuel.frame to create a new data
frame where the merge is done by car name and the common variables are
not repeated (except Mileage, but use the suffixes option).
Summarize the mileage for each type of vehicle. Create a dataset that
contains the vehicle with the highest mileage for each vehicle type.
4.7 Solutions
Solution to Exercise 4.1
We begin by creating a new data frame by adding a new variable to an
existing data frame. Notice that we have used the round function to specify
that one place to the right of the decimal point will be printed.
> country.frame1 <- data.frame(country.frame, GDPcap=
round(country.frame$GDP/country.frame$Pop, 1))
> country.frame1
             GDP Pop Inflation Area     EU GDPcap
Austria      208   8       2.4   84     EU   26.0
France      1432  61       1.7  544     EU   23.5
Germany     2112  82       2.0  358     EU   25.8
Switzerland  259   7       1.6   41 non-EU   37.0
Use the apply and max functions to calculate the maximum of each of the
variables (columns). Remember that the fifth column must be excluded
from the calculations because it is not numeric.
> apply(country.frame1[, -5], 2, max)
 GDP Pop Inflation Area GDPcap
2112  82       2.4  544     37
We set the GDP and GDPcap for France equal to NA and recalculate the
maximum values.
> country.frame1[2, c(1, 6)] <- NA
> apply(country.frame1[, -5], 2, max)
GDP Pop Inflation Area GDPcap
 NA  82       2.4  544     NA
Now we determine which rows contain a missing value and exclude them
from the new maximum calculations.
> gdp.na <- is.na(country.frame1$GDP)
> apply(country.frame1[!gdp.na, -5], 2, max)
 GDP Pop Inflation Area GDPcap
2112  82       2.4  358     37
The problem we have encountered here is that, by searching for rows with
missing values, we have excluded the entire row. In our example, this corresponds to all data on France. Hence, when we calculate the maximums, we
find the largest area to be 358, which is that of Germany. What we really
want to do is to exclude only that one missing value and not the whole
row. The simple solution is found by looking at the help documentation for
the max function. We find that the additional parameter na.rm excludes
individual missing values from analyses.
> help(max)
> apply(country.frame1[, -5], 2, max, na.rm=T)
 GDP Pop Inflation Area GDPcap
2112  82       2.4  544     37
There are several ways of finding the countries where both French and
German are spoken. The solution here is not unique. We proceed by putting
the second and third components of the list into a new vector and using the
duplicated function to find which country names have been duplicated.
We found three countries where the two languages are spoken.
> Both.frame <- c(country.list1[[2]], country.list1[[3]])
> dup <- Both.frame[duplicated(Both.frame)]
> dup
"Belgium" "Luxemburg" "Switzerland"
> country.list1[[1]][dup, ]
            GDP Pop Inflation Area     EU
NA           NA  NA        NA   NA     NA
NA           NA  NA        NA   NA     NA
Switzerland 259   7       1.6   41 non-EU
Notice that the first two lines of our list containing the characteristics of
the French- and German-speaking countries are filled with NAs. This is
simply because we don't have any country characteristics for Belgium and
Luxemburg.
3        Buick Century 4 13150   USA           3
4     Buick Le Sabre V6 16145   USA           3
5       Buick Skylark 4 10565   USA           2

  MileageCar    Type Weight Disp.  HP MileageFuel     Fuel
1         20  Medium   3265   163 160          20 5.000000
2         27 Compact   2670   121 108          27 3.703704
3         21  Medium   2880   151 110          21 4.761905
4         23   Large   3325   231 165          23 4.347826
5         23 Compact   2640   151 110          23 4.347826
To reduce the length of output, only the first five lines have been shown.
A useful tool for the next part of the exercise is to sort the data according
to vehicle type and mileage. In that way, it is possible to compare the sorted
data to the summary statistics to see if you are on track.
> car.fuel.sort <- car.fuel[order(car.fuel[,"Type"],
car.fuel[,"MileageCar"]), 1:11]
To get the summary statistics of the mileage by vehicle type, we need to
use the by function (output shown for the first two vehicle types only).
> by(car.fuel.sort$MileageCar, car.fuel.sort$Type, summary)
INDICES:Compact
x
Min.:21.00
1st Qu.:23.00
Median:24.00
Mean:24.13
3rd Qu.:25.50
Max.:27.00
-------------------------------------------------------
INDICES:Large
x
Min.:18.00
1st Qu.:19.00
Median:20.00
Mean:20.33
3rd Qu.:21.50
Max.:23.00
In order to get a dataset with the highest mileage for each vehicle type, we
need to sort the original dataset in descending order of mileage and keep
the rows that are not duplicates of the vehicle type.
> x <- car.fuel[rev(order(car.fuel$MileageFuel)), ]
> high.mpg <- x[!duplicated(x$Type), ]
> high.mpg
              Row.names Price   Country Reliability
19       Ford Festiva 4  6319     Korea           4
28 Honda Civic CRX Si 4  9410     Japan           5
55       Toyota Camry 4 11588 Japan/USA           5
57    Toyota Cressida 6 21498     Japan           3
4     Buick Le Sabre V6 16145       USA           3
40      Nissan Axxess 4 13949     Japan          NA

   MileageCar    Type Weight Disp.  HP MileageFuel     Fuel
19         37   Small   1845    81  63          37 2.702703
28         33  Sporty   2170    97 108          33 3.030303
55         27 Compact   2920   122 115          27 3.703704
57         23  Medium   3480   180 190          23 4.347826
4          23   Large   3325   231 165          23 4.347826
40         20     Van   3185   146 138          20 5.000000
The commands above have simply looked at the sorted dataset and taken
the first row where each vehicle type appears. If there are ties of mileage for
the same vehicle type, they will not be taken into account. To see these,
we can either look at the original sort above, or we can use the split and
lapply functions as shown below.
> high.mpg.list <- lapply(split(car.fuel, car.fuel$Type),
function(x) x[x$MileageFuel == max(x$MileageFuel), ])
> high.mpg.list
$Compact:
        Row.names Price   Country Reliability
2       Audi 80 4 18900   Germany          NA
55 Toyota Camry 4 11588 Japan/USA           5
   MileageCar    Type Weight Disp.  HP MileageFuel     Fuel
2          27 Compact   2670   121 108          27 3.703704
55         27 Compact   2920   122 115          27 3.703704

$Large:
          Row.names Price Country Reliability
4 Buick Le Sabre V6 16145     USA           3
  MileageCar  Type Weight Disp.  HP MileageFuel     Fuel
4         23 Large   3325   231 165          23 4.347826

$Medium:
              Row.names Price Country Reliability
25 Ford Thunderbird V6 14980     USA           1
30     Hyundai Sonata 4  9999   Korea          NA
57   Toyota Cressida 6 21498   Japan           3
   MileageCar   Type Weight Disp.  HP MileageFuel     Fuel
25         23 Medium   3610   232 140          23 4.347826
30         23 Medium   2885   143 110          23 4.347826
57         23 Medium   3480   180 190          23 4.347826

$Small:
        Row.names Price Country Reliability
19 Ford Festiva 4  6319   Korea           4
   MileageCar  Type Weight Disp. HP MileageFuel     Fuel
19         37 Small   1845    81 63          37 2.702703

$Sporty:
              Row.names Price Country Reliability
28 Honda Civic CRX Si 4  9410   Japan           5
   MileageCar   Type Weight Disp.  HP MileageFuel     Fuel
28         33 Sporty   2170    97 108          33 3.030303

$Van:
            Row.names Price Country Reliability
38 Mitsubishi Wagon 4 14929   Japan          NA
40    Nissan Axxess 4 13949   Japan          NA
This output shows the various ties found in the dataset. We can put these
pieces of the list together to make one dataset by using the command below.
> do.call("rbind", high.mpg.list)
The function do.call puts together a call to the function specified, rbind
in this case, and adds all elements of the list as arguments to the function
call.
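As a small sketch of how do.call works, the call

> do.call("rbind", list(1:3, 4:6))

is equivalent to entering rbind(1:3, 4:6) directly and yields a matrix with two rows.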
5
Graphics
Graphs are one of S-Plus's strongest capabilities and most attractive features. This section gives an overview of how to create graphs and how to
use the different parameters to modify elements and layouts of graphs.
There are many options and functions available for working with graphs
and graphical elements. The most important commands and their functionality are treated, and many other possibilities are mentioned. If you
are interested in more details, this chapter will enable you to quickly find
and understand the relevant information in the manuals provided with
S-Plus or from the online help.
Windows users can switch back and forth between the graphical user
interface and the command line. Even if the Windows GUI is used, all
actions are mapped to commands, as has been explained earlier.
Under UNIX, you typically create a PostScript file and send it to a PostScript printer. The command to use is
> postscript("filename")
Of course, a standard way of printing graphs is to create the graph on the
screen and, once it is finished, to click the Print button in the menu bar.
The graph is automatically converted into the printer format and sent off
to the printer. Knowing about the printer device is useful, especially if a
series of graphs is to be created and sent to the printer. Instead of creating
a graph and clicking the Print button, creating the next graph, printing
it, and so forth, all graphs can be sent to the printer directly.
We list the most common devices in Table 5.1.
Table 5.1. The most common output devices.
System    Device function  Description
UNIX      motif            Motif graphics window
          postscript       PostScript file
          java.graph       Java graphics window
          pdf.graph        PDF (Adobe Portable Document Format) file
          wmf.graph        WMF (Windows Metafile) file
          hpgl             HPGL file
          hplj             HP LaserJet file
Windows   graphsheet       Graphics window or creation of a file of
                           specified format (JPEG, GIF, WMF, etc.)
                           using graphsheet(file="filename",
                           format="format")
Note Do not forget to close the device. If you quit S-Plus, this is done
for you. Closing the device is especially important if you are in the process
of creating a graphics file or if you are sending graphics commands directly
to a printer. Only if you close the device will the last page be ejected to the
printer; otherwise, the eject page command will not be sent to the printer
or added to the file, respectively. The reason for this is that S-Plus is still
ready to receive graphics commands for the same page.
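As a minimal sketch (the file name is arbitrary), a graph can be written to a PostScript file as follows.

> postscript(file="mygraph.ps")
> plot(1:10)
> dev.off()    # close the device so that the page is completed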
Note Typically, there is a menu bar, which includes the Print button.
If you click on this button, S-Plus generates a file from the graph on
the screen and sends it to the system printer. Make sure that the correct
printer is installed. On UNIX systems, S-Plus keeps a file with the name
ps.out.xxxx.ps in the current directory (it can be configured to remove it
after printing though).
Options
Device parameters can be supplied with the function call, such as specifying
orientation (portrait or landscape) or the height and width of the plot.
All the devices have several options that are set to a default value. You
can override the default values by supplying the parameter to the function.
For example, if the PostScript graphics should be in portrait mode (not
horizontal but vertical) and stored in a file smallgraph.ps, you need to
enter
5.2.1
S-Plus offers the possibility of working with more than one graphics device
at a time. For interactive data analysis and presentations, it is very useful to
have two or more graphics windows on the screen and to show interesting
details by switching between them. Table 5.2 shows an overview of the
most commonly used functions and provides a brief explanation of their
functionality.
The functions offer great flexibility by allowing several graphics windows
on the screen while analyzing a data set. They also offer a nice toolkit for
creating an impressive demonstration of self-developed applications.
Table 5.2. Functions for controlling graphics devices.
Command             Effect
dev.list()          List all open devices
dev.cur()           Return the number of the current device
dev.next()          Return the number of the next device in the list
dev.prev()          Return the number of the previous device in the list
dev.set(n)          Make device n the current device
dev.copy(device)    Copy the current graph to the specified device
dev.copy(which=n)   Copy the current graph to device n
dev.off()           Close the current device
dev.print()         Print the current graph
graphics.off()      Close all open devices
dev.ask(ask=T)      Ask before starting a new page
5.3.1
You can change the character for the points by specifying the parameter
pch. For example,
> plot(x, y, pch="P")
displays the data points with the character P.
5.3.2
If you want to display data points in a graph, there are many ways of doing
it. You might display the data as dots, stars, or circles, simply connect
them by a line, or both. The data might be displayed as height bars or
made invisible to set up the axes only and add more elements later. For
this purpose, the function plot has a parameter type. Figure 5.1 shows
examples of the most commonly used type options. Options for the type
mode in which to plot are summarized in Table 5.3. The statements used
to generate the example graph are listed for reference.
Table 5.3. Options for the parameter type in the plot command.
type option  Plot style
type="p"     Points
type="l"     Lines
type="b"     Both points and lines
type="h"     Height bars (vertical lines)
type="o"     Points and lines, overlaid
type="s"     Stairsteps
type="n"     No plotting (set up the axes only)
x <- -5:5                # generate -5 -4 ... 3 4 5
y <- x^2                 # y equals x squared
par(mfrow=c(3, 2))       # set a multiple figure screen
plot(x, y)               # and create the graphs
plot(x, y, type = "l")   # in different styles
plot(x, y, type = "b")
plot(x, y, type = "h")
plot(x, y, type = "o")
plot(x, y, type = "n")
mtext("Different options for the plot parameter type",
      side=3, outer=T, line=-1)
Figure 5.1. Examples of different options for the type parameter of the plot
function.
5.3.3
The type option is only one of many that can be set to modify a figure's
display. You can change the labels for the axes, omit the axes, set the limits
of an axis, change colors, and just about anything you can imagine.
Here is an example of how several options can be combined into a single
call to the plot function. The resulting graph is displayed in Figure 5.2.
> x <- seq(-5, 5, 1)
> y <- x^2
> plot(x, y, pch="X", main="Main Title", sub="Subtitle",
xlab="X Axis Label", ylab="Y Axis Label", xlim=c(-8, 8),
type="o", lty=2)
Figure 5.2 shows most of the main elements of a graph to give you an
idea of what can be changed. Table 5.4 shows an overview of some of the
more common graphics options that may be modified.
Table 5.4. Some common graphics parameters.
Parameter              Description
type=                  Type of plot (points, lines, etc.)
axes=T / axes=F        Draw or suppress the axes
main="Title"           Main title above the graph
sub="Subtitle"         Subtitle below the x-axis
xlab="x axis label"    Label for the x-axis
ylab="y axis label"    Label for the y-axis
xlim=c(xmin, xmax)     Range of the x-axis
ylim=c(ymin, ymax)     Range of the y-axis
pch="*"                Character to use for plotting points
lwd=1                  Line width
lty=1                  Line type (solid, dashed, etc.)
col=1                  Color to use for plotting
Note: This is not a complete listing of the parameters available. For all
details, see the online help function by entering help(par).
Figure 5.3. A sine curve plotted with "x" characters, annotated with text labels
such as "sin(pi)=0" and with lines added using abline.
5.4.1
134
5. Graphics
Table 5.5. Functions for adding elements to an existing graph.
Function                     Description
abline(a, b)                 Line with intercept a and slope b
arrows(x1, y1, x2, y2)       Arrows from (x1, y1) to (x2, y2)
axes()                       Add axes to the plot
axis(n)                      Add an axis on side n
box()                        Draw a box around the plot
points(x, y)                 Add points at the given coordinates
lines(x, y)                  Add lines connecting the given coordinates
polygon(x, y)                Draw a polygon with the given vertices
segments(x1, y1, x2, y2)     Draw line segments from (x1, y1) to (x2, y2)
text(xpos, ypos, text)       Add text at the given position
title("title", "subtitle")   Add a title and subtitle to the graph
Note Sometimes you will want to plot line segments. This is easy to do
if you know that S-Plus interrupts a line where a missing value (NA) is
encountered. The line does not connect from and to the missing value. We
will have a detailed look at missing values later. Alternatively, you can use
the segments function.
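As a small sketch of this behavior, the following draws two separate line segments, because the line is interrupted at the NA in the middle.

> x <- c(1, 2, NA, 3, 4)
> y <- c(1, 2, NA, 2, 1)
> plot(x, y, type="l")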
5.4.2
As abline is a very useful function for enhancing a graph with one or more
lines, we give some application examples in Table 5.6; in fact, we have
already used it in Figure 5.3. The function abline is especially useful for
drawing gridlines. If you want to have the gridlines more in the background
of the picture, make them gray instead of black, so that they disturb the
impression of the whole picture less. For this purpose, you need to determine
the color coding of the gray tones on your output device. Another approach
is to add dotted lines instead of solid ones. As usual, there are many ways
to achieve a result. You need to try out different possibilities to find the
one most suitable for your application.
Table 5.6. Some examples of uses for abline.
Function      Description
abline(a, b)  Draw a line with intercept a and slope b
abline(l)     Draw the line of a regression fit l
abline(h=x)   Draw a horizontal line (or lines) at the height(s) given by x
abline(v=x)   Draw a vertical line (or lines) at the position(s) given by x
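For example, dotted gridlines could be added to an existing graph as follows (the grid positions are arbitrary).

> plot(1:10)
> abline(h=c(2, 4, 6, 8), lty=3)    # dotted horizontal gridlines
> abline(v=c(2, 4, 6, 8), lty=3)    # dotted vertical gridlines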
5.4.3
Table 5.7. Optional parameters for plotting axes.
Parameter  Description
side=n     Side of the plot on which to draw the axis
           (1=bottom, 2=left, 3=top, 4=right)
at=x       Positions at which to draw the tick marks
labels=s   Labels to use at the tick marks
pos=y      Position at which to draw the axis line
las=m      Style (orientation) of the axis labels
Note that an axis typically consists of lines. Therefore, all settings for
lines, such as changing the thickness or the color, also apply to axes. For
example,
> axis(4, lwd=2, col=2)
adds an axis on the right-hand side twice as thick as the standard style,
and in color number 2.
5.4.4
Sometimes, the text overlaps or touches the point on the graph just slightly.
In such a case, add an extra space at the beginning of the text. The optional
parameters of text and mtext offer more possibilities (see Table 5.9).
Figure 5.4. The plot region inside the figure region, surrounded by the figure
margins (margin 1 at the bottom through margin 4 at the right) and the outer
margins.
The parameters referenced in Table 5.10 can be set by using the par
function, as in the following example, where the figure margins are set to
be equal to 1 line on all sides for all successive graphs.
> par(mar=c(1, 1, 1, 1))
A setting can be retrieved by entering
> par("mar")
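Because settings made with par remain in effect for subsequent graphs, it is often convenient to save the old settings and restore them later. A common idiom is

> oldpar <- par(mfrow=c(2, 2))    # par returns the previous settings
> # ... create some graphs ...
> par(oldpar)                     # restore the previous layout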
Table 5.10. Graphics parameters for controlling the figure layout.
Parameter                    Description
fin=c(m,n)                   Figure size (width, height) in inches
pin=c(m,n)                   Plot region size (width, height) in inches
mar=c(5,4,4,2)+0.1           Figure margins in lines of text
                             (bottom, left, top, right)
mai=c(1.41,1.13,1.13,0.58)   Figure margins in inches
oma=c(0,0,0,0)               Outer margins in lines of text
omi=c(0,0,0,0)               Outer margins in inches
plt=c(0.11,0.94,0.18,0.86)   Plot region as fractions of the figure region
usr                          User coordinates of the plot region
                             (xmin, xmax, ymin, ymax)
mfrow=c(m,n)                 Matrix of m x n figures, filled by row
mfcol=c(m,n)                 Matrix of m x n figures, filled by column
fty="style"                  Figure type
5.6.1
If you are planning to create an advanced figure layout, you should read
the following sections and the next chapter on Trellis displays. If Trellis
displays fulfill your needs, they can easily be composed into a multifigure
graph of almost any layout. If you are already familiar with Trellis displays,
jump to Section 6.4.6 (page 165).
After having obtained an overview of what methods are available and
how they can be used, you can take your pick. Select the method that is
most convenient for getting the desired graph.
5.6.2
Matrices of Graphs
> par(mfrow=c(2, 2))   # initialization
> x <- 1:100/100:1
> plot(x)
> plot(x, type="l")
> hist(x)
> boxplot(x)
5.6.3
Multiple-Screen Graphs
Another way of creating a set of figures in the same graph is to use the
split.screen function. In its simplest use, it works like the par(mfrow)
command. One would enter
> split.screen(c(2, 2))
to generate a 2 x 2 matrix of figures. In comparison to the par(mfrow)
variant, splitting the screen provides the ability to hop around between
areas. On the other hand, it does not automatically plot the next graph in
the next plot area but waits for explicit specification of another plot area.
Entering
> screen(2)
will activate the second screen in the display.
The matrix of screens can be filled either horizontally (which is the
default) or vertically by setting byrow=F in the call to split.screen.
It is possible to have a matrix of differently sized graphs, or even an
arbitrary layout. If a new graph is started, its initial coordinates range from 0 to 1
(similar to 0-100%). Each screen in the graph is defined by its lower left
and top right corner. The following command defines a layout of a 2 x 1
matrix, where the first row stretches from 0 to 1 on the x-axis and from 0
to 3/4 on the y-axis. The second row stretches from 0 to 1 on the x-axis
and from 3/4 to 1 on the y-axis.
> split.screen(figs=matrix(c(0, 1, 0, 3/4, 0, 1, 3/4, 1),
2, 4, byrow=T))
Note that the matrix argument to figs must have four columns. Each row
defines the corners of one of the screens. The screen layout is closed by
using close.screen.
> close.screen(all=T)
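A complete round trip might look like the following sketch (the plotted content is arbitrary).

> split.screen(c(2, 2))    # four equally sized screens
> plot(1:10)               # drawn in screen 1
> screen(3)                # jump to another screen
> hist(runif(100))
> close.screen(all=T)      # close the layout again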
5.6.4
After having opened a new graphics page, the coordinates of the plot are
set to (0,0) for the lower left corner and (1,1) for the top right corner. Let
us now plot the Geyser data set that comes with S-Plus.
We want to set up a figure such that a scatterplot of the data appears
as the main plot, with the addition of a histogram of the x data on top of
the scatterplot and a boxplot of the y data to the right-hand side of it. The
layout we have in mind looks like the one in Figure 5.5.
The plot itself should have the lower left corner coordinates (0, 0) and
the top right corner coordinates (0.7, 0.7). Therefore, the figure on top of
the data plot has the lower left corner (0, 0.7), and the upper right corner
is (0.7, 1.0). For the vertical boxplot on the right side, we get the lower left
corner (0.7, 0) and the upper right corner (1.0, 0.7).
By experimenting with the different sizes and the borders, we discover
that it is better to have the figures overlap a little bit.
Figure 5.5. The intended layout: the graph of (x, y) from (0, 0) to (0.7, 0.7), the
histogram of x above it from (0, 0.7) to (0.7, 1), and the boxplot of y to its right
from (0.7, 0) to (1, 0.7).
> frame()
> par(fig=c(0, 0.7, 0, 0.7))
> plot(geyser$waiting, geyser$duration, pch="*",
xlab="Waiting Time", ylab="Duration of Eruption")
> title("\nOld Faithful Geyser Data Set")
> par(fig=c(0, 0.7, 0.65, 1))
> hist(geyser$waiting, xlab="")
> par(fig=c(0.65, 1, 0, 0.7))
> boxplot(geyser$duration)
Finally, we obtain Figure 5.6.
Figure 5.6. The Old Faithful Geyser data: waiting time versus duration of eruption, with a histogram of the waiting times on top and a boxplot of the durations
on the right.
5.7 Exercises
Exercise 5.1
In a single figure, plot the functions sin, cos, and sin + cos with different
colors and line styles. Use 1000 points in the interval [-10, 10] and label
the figure with a title, subtitle, and axis labels.
Exercise 5.2
Construct two vectors, x and y, such that the S-Plus command
> plot(x, y, type="l")
creates the following figures:
a) a rectangle or square
b) a circle
c) a spiral
Hint: Do not think of y as being a function of x (in a mathematical sense),
but think of how the trace of the figure can be created. For drawing circles,
you might use the property that, for any x, a point (sin(x), cos(x)) lies on
the unit circle with origin (0, 0) and radius 1.
Exercise 5.3
We consider the so-called Lissajous figures. They are defined as

    z(x) = (z1(x), z2(x)) = (sin(ax), sin(bx))
5.8 Solutions
Solution to Exercise 5.1
What we need to create this graph is a sequence of x values, for which
we calculate the values sin(x) and cos(x). If several functions are to be
plotted within one picture, the order of plotting the figures is important.
The arguments to the plot function, usually the first curve, determine
the boundaries of the figure. The following curves are then added to the
existing picture within the existing limits. If you want to see more of the
sin(x) + cos(x) curve than that which fits within the boundaries of sin(x),
extend the y limits by using the option ylim of the plot function or (and
this is easier) plot the curve sin(x) + cos(x) first.
> x <- seq(-10, 10, length=1000)
> plot(x, sin(x)+cos(x), type="l", xlab="x values", ylab="y values")
> lines(x, sin(x), lty=2, col=2)
> lines(x, cos(x), lty=3, col=3)
> title("Trigonometric Functions",
"sin(x), cos(x), and sin(x)+cos(x)")
If you are interested, plot the curves in a different order and see what
happens. If it is not clear which curve to draw first, you need to calculate
the axis limits before plotting. The above code is rewritten as follows.
> x <- seq(-10, 10, length=1000)
> y1 <- sin(x)
> y2 <- cos(x)
> y3 <- sin(x) + cos(x)
> xlimits <- range(x)
> ylimits <- range(y1, y2, y3)
We can now plot the graph as before, but we specify the axis limits
explicitly.
> plot(x, y1, xlim=xlimits, ylim=ylimits, xlab="x values",
ylab="y values")
> lines(x, y2, lty=2, col=2)
> lines(x, y3, lty=3, col=3)
> title("Trigonometric Functions",
"sin(x), cos(x), and sin(x)+cos(x)")
Alternatively, the plot command can be used to set up an empty graph
with just axes and labels but no curves. The curves are added afterwards.
> plot(xlimits, ylimits, type="n", xlab="x values",
ylab="y values")
> lines(x, y1)
> lines(x, y2, lty=2, col=2)
> lines(x, y3, lty=3, col=3)
> title("Trigonometric Functions",
"sin(x), cos(x), and sin(x)+cos(x)")
A rectangle
> x <- c(1, -1, -1, 1, 1)
> y <- c(1, 1, -1, -1, 1)
> plot(x, y, type="l")
Note that the first point defines the upper right corner of the rectangle,
the second defines the upper left corner, and so on.
A circle
> x <- seq(0, 2*pi, length=1000)
> plot(sin(x), cos(x), type="l")
(c) The spiral is made simply by revolving several times around the circle
and changing the radius over time. The windings depend very much on the
division. By changing the start, the end, or the division factor, you get very
different pictures.
We select a sequence of values from 6 to 32. With a sequence of length
2, we obtain a circle by moving around it once. Therefore, with a
sequence length of 26, the spiral must have 13 windings.
> x <- seq(6, 32, length=1000)
> plot(x*sin(pi*x), x*cos(pi*x), type="l")
Figure: Lissajous figures for several parameter combinations, for example
sin(7x) against sin(3x), sin(8x) against sin(6x), and sin(11x) against sin(3x).
6
Trellis Graphics
Figure 6.1. Sketch of a Trellis layout: a 2 x 3 grid of graphs of weight and height,
with columns for age=young, age=intermediate, and age=old, and rows for
sex=male and sex=female.
If we switch to the middle column, we can graphically compare the relation between height and weight for the middle age group. Finally, the
rightmost column gives us an idea of how different the height/weight
relation is for old men and women.
As an aside, Figure 6.1 displays the origin of the name Trellis, based on
its layout of graphs. Altogether, a graph as sketched in Figure 6.1 displays
the four-dimensional structure in the variables height, weight, sex, and age.
Note Trellis graphs are implemented somewhat differently from the other
graphics in S-Plus, and this is why we treat them in a separate section.
All Trellis graphs follow the same structure, have a consistent interface and
parameter setting, and offer features such as saving graphs in variables.
You might find Trellis graphs more appealing than the traditional graph
routines just because of the layout, rendering, and default coloring. Therefore, you may want to use Trellis graphs as your default graph type.
6.1 An Example
The data set singer contains measurements of the heights of singers in the
New York Choral Society together with their assigned voice part. Entering
> singer
displays the entire data set. You see that it consists of two variables of name
height and voice.part. We are interested in figuring out if the height of
the singers differs between voice part groups. This is exactly the type of
question for which Trellis has appropriate displays. We can get a set of
histograms, one for each voice part's heights, by entering
Figure 6.2. Heights of singers for the voice parts in the New York Choral Society.
> histogram(~ height | voice.part, data = singer)
The result is shown in Figure 6.2. The function histogram is
a function from the Trellis library, whereas hist is the (older) standard
function to generate histograms.
6.2.1
Trellis Syntax
The basic syntax of a Trellis graph follows the syntax for formulas that
are treated in a later chapter in more detail. A formula expresses the
dependencies between variables in the following fashion:
dependent variable ~ explanatory variable | conditioning variable
The expression y ~ x | z is read as "y is modeled as x given z." On the
left-hand side of the vertical dash ( | ) is the specification of the relationship
of interest. On the right-hand side of the vertical dash, the conditioning
variables are specified.
An optional further specification is the name of the data set in which
the variables are stored, as in (y ~ x | z, data=singer). The specification
of the data set is only necessary if S-Plus would not find the variables
otherwise.
Depending on the type of graph, it might not be necessary to specify all
of the three components above. There might be no dependent variable (as
in histogram), or there might be several conditioning variables you wish
to specify. More than one conditioning variable is specified by just listing
them, separated by the multiplication symbol (*).
Let us go through a few types of displays. An x-y scatterplot can display
two variables, say x and y. We can generate a simple scatterplot using
xyplot(y ~ x) and a simple histogram of x by entering histogram(~ x). If
we wanted to group both displays according to the observations of a variable
z, we would enter xyplot(y ~ x | z) and histogram(~ x | z), respectively.
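The singer data introduced above fit this scheme as well. For example, assuming the Trellis functions are available, a box-and-whisker comparison of the voice parts could be sketched as

> bwplot(voice.part ~ height, data = singer)

with the factor voice.part on the left-hand side of the formula.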
6.2.2
Trellis Functions
The Trellis library comprises a series of functions that follow the syntax described in the preceding sections. We list the functions available
in Table 6.1. All functions take the standard Trellis arguments, examples being data= or aspect= (see the documentation for trellis.args and
trellis.3d.args).
Table 6.1. Trellis functions.

Function name   Required arguments   Graph type
densityplot     (~x)                 Density estimate graph
histogram       (~x)                 Histogram
qqmath          (~x)                 Quantile-quantile graph of a data set
                                     versus a distribution's quantiles
xyplot          (y~x)                x-y scatterplot
bwplot          (f~x)                Box-and-whisker graph (boxplot)
qq              (f~x)                Quantile-quantile plot, f having two levels
stripplot       (f~x)                1D plot, categorized according to f
barchart        (f~x)                Bar chart, categorized according to f
dotplot         (f~x)                Dotplot with data categorized by f
piechart        (~x)                 Pie chart of data in x. x can be table(data)
parallel        (~d)                 Parallel coordinates plot
splom           (~d)                 Scatterplot matrix (pairwise graphs)
cloud           (z~y*x)              3D scatterplot
contourplot     (z~y*x)              3D contour plot
levelplot       (z~y*x)              3D level (color contour) plot
wireframe       (z~y*x)              3D surface (wireframe) plot
6.2.3
> print(xygraph)

or just xygraph prints the graph. If you don't want to store the graph in a variable,

> xyplot(y~x)

is sufficient to display the graph on the graphics device. Note that entering xyplot(y~x) on the command line implicitly calls the print function. The following note points out a pitfall when the graph is not created from the command line.
Note If a Trellis graph function is not called from the top level (the interactive environment), a statement like

histogram(~x)

calls the function histogram. The function histogram generates a histogram figure and returns it to the calling function without displaying it. In those cases, calling the print function explicitly is necessary to display the graph, as in

print(histogram(~x))

Otherwise, the calculated graph will not be displayed. This is important to keep in mind when writing functions and scripts.
You might want to note that this is consistent with S-Plus behavior in
other cases. Entering a statement like 3*5 on the command line returns the
result; the same statement embedded in a block (like a function, as we will
see later) forces S-Plus to calculate the result, but it will not be displayed
unless you use print.
Here is an example:

> x <- seq(0, 2*pi, length=1000)
> xyplot(sin(x)~x)
> title("Sine Curve")

Now, enter the following and observe closely what happens.

> { xyplot(cos(x)~x); title("Cosine Curve") }

Try the same and omit the curly brackets.
Table 6.2. Trellis graphics devices.

System    Trellis call                                    Device
Windows   trellis.device(graphsheet)                      Graph sheet window
          trellis.device("printer", format="EPS")         PostScript printer
          trellis.device("printer", format="clipboard")   Windows clipboard
UNIX      trellis.device(device)                          device can be postscript,
                                                          motif, pdf.graph,
                                                          java.graph, etc.
Note: There are more graphics formats S-Plus can write, such as PDF and JPEG. See the documentation on Devices. To specify the file to write to, add file="filename" to the function call.
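As a hedged example (the file name is chosen arbitrarily), a Trellis graph can be written to a PostScript file on UNIX like this:

> trellis.device("postscript", file="mygraph.eps")
> print(histogram(~height, data=singer))
> dev.off()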
You can switch between open devices using dev.set(), and copy from one device to another using dev.copy. If you
want to close the last device opened, enter
> dev.off()
To close all open devices, enter
> graphics.off()
If the current device is a graph sheet, it will be cleared before a new function
call generates new graphs. Multiple graphs created during a single function
call are all shown on separate pages of the graph sheet. When the next
graphics-generating function is called, the graph sheet is reset to an empty
state. If you want to keep the graphs, just open another graph sheet. The
last device opened is automatically the current device.
Note Our warning about closing devices before printing the graphics is
valid for Trellis devices as well. See the note on page 128.
In case you have entered graphics commands and do not get a full graph,
check if you closed the device (enter dev.list()). If the device was closed,
check if you put a print statement around the call to the graphics function.
Setting Options
To see the current device settings in the form of a graphical display, use
the function show.settings.
> show.settings()
A graph is displayed where each plot shows the settings for one specific option list. The option names are shown below the graphs and are listed in Table 6.3. Retrieve the specific component you are interested in and look at its settings.
In the following example, we retrieve the settings for the line width in
Trellis graphs, change its value, and store the settings back to the system
settings such that they become active for all lines in Trellis displays.
> trellis.plot.line <- trellis.par.get("plot.line")
> trellis.plot.line$lwd <- 3    # change line width
> trellis.par.set("plot.line", trellis.plot.line)
For all subsequent graphs of the current session, line widths will be 3.
Table 6.3. Trellis plot options.

add.line          add.text         bar.fill         box.3d
box.dot           box.rectangle    box.umbrella     cloud.3d
dot.line          dot.symbol       fill             pie.fill
plot.line         plot.symbol      reference.line   regions
strip.background  strip.shingle    superpose.line   superpose.symbol
wires
6.4.2
If the conditioning variable has eight categories, for example, one might not want to plot them in a 2x4 or 4x2 arrangement but rather in a 3x3 layout with one of the fields left blank. The layout is specified by using the layout parameter to define the overall layout, in this case by setting
layout=c(3,3) in the call to the Trellis function. Where the graphs are positioned, or whether a field is left blank, is indicated by a vector of logical values. A T (TRUE) in the skip argument indicates that the field is to be left blank. We use the singer data to generate the layout described (see footnote 2).

> histogram(~height | voice.part, data=singer,
      layout=c(3,3), skip=c(rep(F,4), T, rep(F,4)))
[Figure 6.3: histograms of height (in percent of total) for the eight voice parts, Soprano 1/2, Alto 1/2, Tenor 1/2, and Bass 1/2, arranged in a 3x3 layout with the center field left blank.]
The result is shown in Figure 6.3. The first value of layout determines the number of columns in the graph, the second the number of rows (just in contrast to the specification of a matrix dimension). As a 3x3 matrix consists of 9 fields, the parameter skip is a logical vector of length 9 with a T in position 5 to indicate that field 5 is to be left blank.

If layout is a vector of length 3, the third value defines the number of pages to generate.
In the same way, one can change the aspect ratio of a graph. The aspect is the ratio of the height and width of a graph or, to be precise, aspect ratio = y-axis length / x-axis length.
2 The footnote about changing the strip color (page 155) holds here as well.
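For illustration (values chosen arbitrarily), a multipage layout and a changed aspect ratio could be requested like this:

> histogram(~height | voice.part, data=singer,
      layout=c(2,2,2))    # 2 columns, 2 rows, 2 pages
> histogram(~height | voice.part, data=singer,
      aspect=0.5)         # panels half as tall as they are wide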
6.4.3 Ordering of Graphs
You might have noticed that S-Plus has a standard way of arranging the graphs belonging to certain levels of the conditioning variables. If there is only a single conditioning variable, the graph that conditions on the first level of the conditioning variable shows up in the lower left corner. The second graph is to the right of it, until the bottom row is filled up. The second row from the bottom is filled next, and so it continues until all rows are filled. Check out the arrangement of levels in the original data by querying the levels of the conditioning variable. For the singer data set, the levels of the variable voice.part are obtained as follows:
> levels(singer$voice.part)
[1] "Bass 2"    "Bass 1"    "Tenor 2"   "Tenor 1"
[5] "Alto 2"    "Alto 1"    "Soprano 2" "Soprano 1"
Note You might wonder why the ordering is not in the way one reads a
book from the top left to the lower right. You can arrange the graphs in
this way by setting as.table=T.
The default arrangement is as described because it creates the analogy to numeric x- and y-axes, putting the first (the smallest) value to the left on the x-axis and at the bottom on the y-axis. This is especially
important if the categories are ordered. If the categories represent ordered
factors such as low, medium, and high education or income, placing the
lowest category lowest (physically) in a graph makes the reading of the
graph much more intuitive. For example, the group with low education and
low income is in the bottom left corner of the graph, whereas the group
with high education and high income is in the top right corner, where the
high x and y values are found on a numeric scale.
The order of the graphs (panels) within one of the two arrangements is determined by the levels of the conditioning factors. Changing the order of the levels is described in Section 12.3 (pp. 378 ff.).
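As a minimal sketch of the idea (anticipating Section 12.3), the levels of a factor can be reordered by recreating the factor with an explicit levels argument; the panels of a subsequent Trellis graph then follow the new ordering:

> singer$voice.part <- factor(singer$voice.part,
      levels=c("Soprano 1", "Soprano 2", "Alto 1", "Alto 2",
               "Tenor 1", "Tenor 2", "Bass 1", "Bass 2"))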
6.4.4 Axis Customization
There are more possibilities, for example specifying to what side of the
panels to place the axes or how much space to leave between panels. For
more details, see the documentation for trellis.args.
6.4.5
The panel strips are the horizontal bars on top of each graph in a Trellis layout. They indicate the conditioning variables' values for the graph. For the 2D Trellis functions, there is an argument strip: an optional specification of a function that draws the strips across the panels. It is preset with a call to the function strip.default. Take the example of calling dotplot. A call to dotplot such as
> dotplot(year~yield | site, data=barley)

is equivalent to

> dotplot(year~yield | site, data=barley,
      strip=function(...) strip.default(...) )
If you are interested in all details, have a look at strip.default or check
only the arguments of the function using
> args(strip.default)
You will see that strip.default is pretty long, but we can make our life simple. We can define a new strip function that simply calls strip.default with different settings. By looking at the documentation for strip.default, we find that the setting we are interested in for this example is the style setting. We want to set it to the value 1, such that the full strip label is colored in the background color and the text string for the current factor level is centered in it (documentation excerpt).

> dotplot(year~yield | site, data=barley,
      strip=function(...) strip.default(..., style=1) )

does the job for us.
6.4.6
If you use Trellis graphs, you can experiment with the arrangement of the graphs by creating the graphs and storing them in variables. Once the graphs are generated, they can be moved around on the page. There are basically two ways of getting a multigraph page. First, a matrix of graphs with all graphs in rectangular, equally sized areas can be generated. Second, we can give the separate graphs different sizes and any location on the page. Let us start with the first layout type.
> graph1 <- xyplot(duration~waiting, data=geyser)
> graph2 <- histogram(~waiting, data=geyser)
[Figure: graph1 (the scatterplot of duration versus waiting) and graph2 (the histogram of waiting) arranged in equally sized rectangular fields on one page.]
If the graphs should be of different sizes, note that the graphics page's lower left corner (LL) is denoted by the coordinates (0, 0) and the upper right corner (UR) by the coordinates (1, 1). We can position a graph on the page by specifying its lower left and upper right corner on the page in the form (xLL, yLL, xUR, yUR).
> print(graph1, position=c(0, 0.35, 1, 1), more=T)
> print(graph2, position=c(0, 0, 1, 0.45))
Entering these commands generated Figure 6.6.
[Figure 6.6: the scatterplot of duration versus waiting placed above the histogram of waiting, the two graphs given different sizes via the position argument.]
6.4.7

If a Trellis graph is stored in a variable, it can of course be modified later. A function to modify stored Trellis graphs is the update function. Suppose we
generated a graph by entering
> graph <- xyplot(duration~waiting, data=geyser)
If we decided later to add a title and make the axis tick labels twice as
big, we could update the graph stored in the variable graph by entering
> bettergraph <- update(graph, main="A new title",
scales=list(cex=2))
In this fashion, all settings can be updated and stored to a new variable.
6.4.8
The panel function is invoked to create each single graph within a Trellis graph matrix. For example, when using xyplot, the panel function
panel.xyplot is a function that takes a vector x and a vector y and plots
them against each other. The data have already been subset such that only
the data that match the current subset of the conditioning variable are
passed to the panel function.
So if you want to change how x is plotted against y, the panel function
is the one to look at. A list of existing panel functions is shown in Table
6.4. Very often, a new panel function can be created by combining existing
ones; adding a regression line to the graph is such an example.
Table 6.4. Trellis panel functions.

panel.abline      panel.barchart      panel.bwplot
panel.cloud       panel.contourplot   panel.densityplot
panel.dotplot     panel.fill          panel.grid
panel.histogram   panel.levelplot     panel.lmline
panel.loess       panel.pairs         panel.parallel
panel.piechart    panel.plot.shingle  panel.qq
panel.qqmath      panel.qqmathline    panel.smooth
panel.splom       panel.stripplot     panel.superpose
panel.tmd         panel.wireframe     panel.xyplot
Most of the names of the functions clearly identify their purpose. See the
online documentation for details. Functions like points, lines, and text
can, of course, be used in panel functions. See the following text for proper
retrieval of parameter settings.
To smoothly integrate a new panel function into the Trellis system, the
function should not set styles and layouts directly (e.g., by using a setting
such as col=3 to change the color of the plot). Adding a line from within
a panel function should query the Trellis line settings and use those. Here
is an example:
line.settings <- trellis.par.get("plot.line")
lines(x, y, lty=line.settings$lty,
col=line.settings$col, lwd=line.settings$lwd)
Doing it this way, the function leaves all control to the user in setting graphical parameters. The parameters are not hard-coded and unchangeable
without editing the code.
x <- trellis.par.get("plot.symbol")
x$col <- 1      # black
x$pch <- "x"    # plot character "x"
trellis.par.set("plot.symbol", x)
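The definition of the panel function panel.reggraph used below did not survive in this excerpt. A minimal sketch consistent with the surrounding text — plotting the points as xyplot does and adding a least-squares line in the current Trellis line style, as recommended above — might look as follows:

panel.reggraph <- function(x, y, ...)
{
    panel.xyplot(x, y, ...)    # the default scatterplot panel
    # add a regression line, querying the Trellis line settings
    # instead of hard-coding styles
    line.settings <- trellis.par.get("plot.line")
    panel.abline(lm(y ~ x), lty=line.settings$lty,
        col=line.settings$col, lwd=line.settings$lwd)
}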
Finally, we create the second graph with an added regression line and
display it.
> gr2 <- xyplot(duration~waiting, data=geyser,
      panel=panel.reggraph)
> print(gr2, position=c(0.5, 0, 1, 1))
The figure obtained is shown in Figure 6.7. Note that if we display gr1 and gr2 after modifying the settings, the new settings are applied to both graphs, as the settings are not stored with the graph.
Note You can only query the device settings if a device is open. That's why S-Plus opens a Trellis device if you didn't do it yet. You can query Trellis device settings even if a different device such as a graph sheet is open, but if you try to set Trellis parameters on a non-Trellis device, you will get an error message.
Finally, it is often the case that you want to do something that is almost
done by an existing panel function. If that is the case, make your life easy:
Figure 6.7. Trellis display (xyplot) of the Geyser data. The right graph is
constructed with a customized panel function.
store the panel function under a new name and modify it according to your
needs.
For example, say you want to plot confidence intervals instead of boxplots, but in exactly the same way as boxplots are used. The boxplot panel function is panel.bwplot. Copy the panel function, edit the copy, and change the line

q <- quantile(X, c(0.75, 0.5, 0.25))

to

q <- quantile(X, c(1, 0.5, 0))
Calculate the confidence intervals and store them in a data frame such that it looks like the following:

  limits group
    0.34     A
    0.61     A
    1.04     A
    0.11     B
    0.36     B
    0.92     B
Call bwplot now using the new panel function, and supply the data frame created, conditioned on the variable group. If the name of our data frame containing the confidence interval limits is conf.data and our panel function is stored as panel.conf, we would call

> bwplot(group~limits, panel=panel.conf, data=conf.data)
Suppose you have patient data: blood pressure before, during, and after treatment with different drugs. If you are interested in each individual patient's data, you would possibly generate a panel per patient showing all corresponding data. If you are interested in the treatment effects over time for the different drugs, you would possibly want one panel per drug, each displaying a drug's treatment effect at each time point (for example, as boxplots). If you want to compare the treatment effects between the drugs for each time point, there could be a panel per time point, and each panel would show the data for each of the drugs at this time point (for example, as boxplots). To take it to more dimensions, you could look at the distributions of the treatment effects using histograms, displaying a histogram for each combination of drug and time point.

Note in this example that the boxplots over time display two variables in each panel. The graph is bivariate. In contrast, the histogram is a univariate display. A panel can display even more than two dimensions, for example by using 3D graphs or by color-coding the different subgroups.
6.5.1
Putting together some of the settings we went through and some that we
did not introduce yet, you might want to use the following settings for
many graphs that you create in a Trellis-type style.
The settings below produce Trellis graphs without the bar running across
the strip above each panel, larger characters in the strip, graphs arranged
from top left to bottom right, and specic character sizes for axis tickmark
labels and plot symbols.
strip.empty <- function(...)
    strip.default(..., style=1)

<Trellis function>(formula,
    data=<data set name>,
    strip=strip.empty,
    as.table=T,
    scales=list(cex=0.7),
    par.strip.text=list(cex=1.25),
    cex=1.5
)
The setting as.table=T lets Trellis arrange the panels row-wise from the top left to the bottom right. The default setting is to arrange the panels row-wise from the bottom left to the top right. The default makes sense if you have two conditioning variables with an inherent order, such that the position of the panel actually reflects the order of the conditioning variables appropriately (for example, from small to large). Very often, though, there is only one conditioning variable and panels across several rows, such that the default ordering is not that meaningful.
The function strip.empty defines a strip function that does not put a bar into the strip that moves from left to right. This function can be passed to the Trellis function in the strip= argument. You could also pass the definition of the function strip.empty directly to the Trellis function if you use this setting just once. If you use it more than once, though, declaring the function strip.empty and using it repeatedly is more efficient (and the code is more legible).
The setting scales=list(cex=0.7) sets the character expansion for the x- and y-axes to 70 percent of the default character size. Setting scales=list(x=list(cex=0.7)) reduces only the character size of the x-axis. The text inside each strip across a panel figure is enlarged by 25 percent if the setting par.strip.text=list(cex=1.25) is used, and cex=1.5 enlarges characters within each figure by 50 percent (for example, the size of circles used to denote observations in a scatterplot).
6.5.2
If the data set contains time profiles for individual subjects (for example, blood concentration measurements of patients over time), you can use the panel function panel.superpose to display individual profiles. The Theophylline data set stored in Theoph contains time profiles of individuals, with Theophylline concentrations measured over time. Additional variables contain dose and body weight of the subjects.
We graph the data and subset by dose group. The following commands
generate Figure 6.8.
> xyplot(conc~Time | Dose<4.5, groups=Subject,
      type="o", data=Theoph, panel=panel.superpose)
Note that the strip across each of the two panels reads "Dose < 4.5": the conditioning variable, Dose<4.5, is a logical variable, either true or false. The left panel shows the graph for the doses where Dose<4.5 is false, the right panel the ones where Dose<4.5 is true. If you want labels that are easier to interpret, you can replace the term Dose<4.5 by ifelse(Dose<4.5, "low dose", "high dose").
If you have more than two categories and want to define the order of factor levels explicitly, use the function cut instead of ifelse:

cut(Dose, c(min(Dose)-1, 4, 5, max(Dose)+1),
    paste(c("low", "medium", "high"), "dose"))
Alternatively, the data set can be expanded by defining a new variable dosegroup:
> Theoph.extended <- Theoph
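The definition of the new variable was lost at the page break here; a sketch of what it presumably looked like (the cut point 4.5 and the labels are taken from the text above):

> Theoph.extended$dosegroup <- ifelse(Theoph.extended$Dose < 4.5,
      "low dose", "high dose")
> xyplot(conc~Time | dosegroup, groups=Subject, type="o",
      data=Theoph.extended, panel=panel.superpose)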
[Figure 6.8: Theophylline concentration (conc) versus Time, one panel per level of the logical condition Dose < 4.5, with the individual subjects' profiles superposed as connected points.]
6.5.3
Suppose the data at hand are stored in four different objects, all of them numeric vectors. If a Trellis display shall be used to show the four variables in a single Trellis graph, one panel per variable, the function make.groups comes in handy. It concatenates the four vectors into a single one and creates a second vector indicating the source of the data (the names of the four objects in our case).

The outcome is a data frame with two variables, data and which. The component data has the concatenated data, and which indicates which variable each value came from. Let's use Trellis to plot a histogram of a uniform and a normal random variate.

> x <- rnorm(10)    # random normal
> y <- runif(10)    # random uniform
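The calls that presumably followed (lost at the page break) would combine the two vectors with make.groups and condition on the which component; a hedged sketch:

> xy <- make.groups(x, y)    # data frame with components data and which
> histogram(~data | which, data=xy)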
6.5.4
Trellis graphs allow you to display only parts of the data, selected according to a condition. Suppose we want to display the Geyser data, but only the data where the waiting time is less than 50 minutes. Instead of subsetting all variables and then calling the graphics function, we can tell Trellis to do the data selection for us.

> xyplot(waiting~duration, data=geyser,
      subset=(waiting<50))

The subset option needs to be a logical vector with as many values as there are rows in the data set. Each value, either TRUE or FALSE, specifies whether or not to include the particular row in the graphical display. Note in particular that this offers a way to exclude missing values (Not Available values) by setting subset=!is.na(x).
Of course, the logical expression in subset can be a test on more than one variable. For example, we can test if one or more of several variables, say x, y, and z, contains a missing value: we extract from a matrix, say data, the columns of interest, x, y, and z, and test per row (dimension 1 for the apply function) if any value is not available. We want to include the rows where this is not the case, such that the corresponding expression is

subset=!apply(is.na(data[, c("x", "y", "z")]), 1, any)

An alternative is to specify

subset=rowSums(is.na(data[, c("x", "y", "z")]))==0
6.5.5 Adding a Key
A key or legend can be added to a graph by using the key argument or the key function.3 To add a key with two point symbols (+ and *) and

3 There is also a function legend that can be used for traditional-style graphics. That function is unsuitable for Trellis-type graphs.
descriptions "Data Set A" and "Data Set B" to a Trellis graph, one can call
the key function with a command like
> key(points=list(c("+", "*")),
text=list(c("Data Set A", "Data Set B")))
Alternatively, the key specications can be passed to the Trellis function
using a command like
> xyplot(y~x, data=data,
key=list(points=list(c("+", "*")),
text=list(c("Data Set A", "Data Set B"))))
Finally, the key can be added after the graph has been created by using
the update function. In such a case one would enter commands like
> gr <- xyplot(y~x, data=data)
> update(gr,
key=list(
points=list(c("+", "*")),
text=list(c("Data Set A", "Data Set B"))
))
One could use lines instead of character symbols in the commands above by replacing points=list(c("+", "*")) with lines=list(col=2:3), which adds two lines of different colors.
Note Trellis graphs use their own special coordinate system to create
the graph on the graphics device. You therefore need to either supply the
argument key= to the call to a Trellis function, update a stored Trellis
graph with a key, or call the key function from within your own panel
function.
Once the graph is completely created, Trellis coordinates are not easily available any longer. If you use the key function on its own, your key might end up in an odd place or even outside the drawing region. All in all, do not call the Trellis function first and then the key function outside of it.
Note The ordering of the key parameters determines their order in the
key. Specifying
> key(lines=list(col=2:3, lty=1:2),
text=list(c("Curve 1", "Curve 2")))
puts the line to the left of the text, whereas
> key(text=list(c("Curve 1", "Curve 2")),
lines=list(col=2:3, lty=1:2))
does just the opposite.
Look in the manuals for a series of further examples. The manual and the online help provide a full list of all available options, among them title for putting a title string on top of the key, border for framing the key in a specified color, background for specifying a different background color for the key or making it transparent, and columns for arranging the key entries in multiple columns.
6.5.6
> x <- as.data.frame(Theoph)
Next, we convert the values in the Subject variable of the data frame from values like 1, 2, 3, ..., 12 to values like "Subject 1", "Subject 2", etc. Trellis functions sort the panels according to the levels of the factors used for conditioning. If a character vector with values 1, 2, ..., 11, 12 is sorted, the result is going to be 1, 10, 11, 12, 2, 3, ..., 9, an undesired result in our case. The trick is to add leading spaces to the subject numbers so that the sort will have the desired order. The leading spaces are added using the format
4 The data set Theoph comes with the nlme3 library and is an object of class groupedData. The class groupedData defines its own plot function. You can enter plot(Theoph) to see such a plot. If you work with longitudinal data (several observations per unit, for example per patient, over time), you probably want to take a look at the nlme3 library and the book by Pinheiro and Bates (2000).
function. We add the string "Subject" by using the function paste. The resulting vector, containing values like "Subject  1", "Subject  2", etc., gets stored in a new variable PID (Patient Identifier) in the data set x.

> x$PID <- paste("Subject", format(x$Subject))
The Theophylline data set contains 12 subjects numbered 1 to 12. We introduce artificial dose groups for illustration purposes by drawing randomly either A, B, or C for each subject. We draw the dose group per subject (with replacement) and add the variable treatment to the data set x by taking for each row the treatment group of the subject in that row.

> trt.group <- sample(c("A", "B", "C"), 12, replace=T)
> x$treatment <- trt.group[x$Subject]
The data set x is now ready to be used. Let us define the panel function that does what we described above, adding the dosing events to the time axis and the treatment group to each subject's panel.
# Arguments:
#   x          - concentration
#   y          - time
#   subscripts - row indices (passed by Trellis)
#   trt        - treatment group.
#                Must correspond to names of dosing and doses
#   dosing     - named list defining dosing times for each
#                treatment group
#   doses      - named list defining dose amounts for each
#                dosing event
We can now use the treatment group defined in trt to filter out the dose amounts and dosing times from doses and dosing. Note that we use the names of the list elements for extraction (trt is either A, B, or C).
dose <- doses[[trt]]
dose.time <- dosing[[trt]]
If a treatment group that does not have a specification of dose or dosing time is found, we issue a warning.
if(length(dose)==0)
warning(paste("Unknown dose for trt", trt))
if(length(dose.time)==0)
warning(paste("Unknown dosing times for trt", trt))
Finally, we add to the time axis at each dosing time the amount of dose
given, with smaller character sizes (cex=0.8) and left-adjusted (adj=0).
text(dose.time, rep(0, length(dose.time)), dose,
col=5, cex=0.8, adj=0)
The last information added is the treatment group of the particular subject, placed in the top right corner. We determine the current coordinates using par("usr") and place the text at the right margin, 90 percent of the way up from the minimum to the maximum of the y-limits.
coord <- par("usr")    # vector of x-min, x-max, y-min, y-max
text(coord[2], coord[3]+0.9*(coord[4]-coord[3]),
paste("Trt Group", trt), adj=1, col=3)
}
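Assembled, the fragments above yield a panel function along the following lines. The opening of the function and the plotting of the profile itself were lost before this excerpt; the name panel.conctime is taken from the call below, and the use of panel.xyplot for the profile is an assumption.

panel.conctime <- function(x, y, subscripts, trt, dosing, doses, ...)
{
    # plot the concentration-time profile of the current subject
    panel.xyplot(x, y, ...)
    # determine this panel's treatment group from the full-length
    # vector trt, using the row indices passed in subscripts
    trt <- trt[subscripts][1]
    # look up dose amounts and dosing times by group name
    dose <- doses[[trt]]
    dose.time <- dosing[[trt]]
    if(length(dose)==0)
        warning(paste("Unknown dose for trt", trt))
    if(length(dose.time)==0)
        warning(paste("Unknown dosing times for trt", trt))
    # annotate the dose amounts at the dosing times on the time axis
    text(dose.time, rep(0, length(dose.time)), dose,
        col=5, cex=0.8, adj=0)
    # treatment group label in the top right corner
    coord <- par("usr")    # x-min, x-max, y-min, y-max
    text(coord[2], coord[3]+0.9*(coord[4]-coord[3]),
        paste("Trt Group", trt), adj=1, col=3)
}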
This completes the declaration of our customized panel function. We can now create the graph by calling xyplot and telling it to use our own panel function:

strip.empty <- function(...)
    strip.default(..., style=1)

xyplot(conc~Time | PID, data=x, trt=x$treatment,
    strip=strip.empty, panel=panel.conctime, as.table=TRUE)
The call to xyplot contains a few special settings: we use strip.empty as
strip function to suppress the bar in the strip across each panel, we specify
panel.conctime as the panel function to be used, set as.table=TRUE to
have the subjects arranged from top left to bottom right, and supply in
addition the argument trt.
We cannot specify trt=treatment, because treatment will not be found in our data frame x; we need to specify trt=x$treatment instead. Note that the full-length vector is passed to each panel.
The final result is shown in Figure 6.9: a Trellis graph with one panel per patient (PID), each panel indicating which dose was given at what time, with the treatment group indicated in the top right corner of the panel.
Figure 6.9. Concentration-time profiles in the Theophylline data set with dosing times and treatment groups indicated, using the subscripts argument.
6.6 Exercises
Exercise 6.1
Examine the data set singer. The variable voice.part is a factor with
eight levels, Soprano 1 and 2, Alto 1 and 2, Tenor 1 and 2, and Bass 1 and 2.
Reduce the levels in voice.part to four levels only by putting Soprano 1
and Soprano 2 into a Soprano category, Alto 1 and Alto 2 into an Alto
category, and so on.
Graph the data in a Trellis histogram.
Hint: Check for the levels of the factor variable by using the levels
function. Convert the factor variable that is internally coded as integer to
integer numbers and use cut for regrouping the data.
Exercise 6.2
Create Trellis histograms for 50 random normal and random uniform numbers. Add a key to the graph that displays the number of observations,
mean, and variance for each group (each panel).
Hint: Use the function make.groups to generate the data together with
a conditioning variable. The functions split and sapply can be used for
calculating the summary statistics.
Exercise 6.3
Consider the geyser data set. Display the duration of an eruption (y-axis)
against the waiting time for this eruption (x-axis) using xyplot. Add a
linear regression line and a loess regression curve using the S-Plus function
loess. Exclude the estimated values 2, 3, and 4 that were originally
recorded as short, medium, and long, and repeat the regression using loess.
Add a key to the graph using the function key.
Consider the hypothesis that the duration of the eruption depends on
the preceding waiting time. Rearrange the data set accordingly and redo
the graph above.
Exercise 6.4
Write a panel function that shows the following characteristics of a data
set graphically:
Minimum and maximum as color bars
6. Trellis Graphics
Figure 6.10. Trellis display of data spread, mean, and standard deviations.
6.7 Solutions
Solution to Exercise 6.1
In order to categorize the factor variable voice.part into new categories,
let us first look at what levels the variable currently has.
> levels(singer[,"voice.part"])
[1] "Bass 2"    "Bass 1"    "Tenor 2"   "Tenor 1"
[5] "Alto 2"    "Alto 1"    "Soprano 2" "Soprano 1"
We group the data into new classes by using the cut function. We specify
the breaks in between the integer values, such that it is clear to which group
a value is assigned. The function cut takes a labels argument for assigning
tags to the groups. It does not generate a factor, though, so we must
convert the grouped data to factors explicitly.
> voicepart.4gr <- cut(as.integer(singer[,"voice.part"]),
breaks=c(0.5,2.5,4.5,6.5,8.5),
labels=c("Bass", "Tenor", "Alto", "Soprano"))
> voicepart.4gr <- factor(voicepart.4gr)
Now, it is fairly easy to graph the data in a histogram display.
> histogram(~height | voicepart.4gr, data=singer)
We finally copy the data set singer from the system directory to our
local directory and add the new variable voicepart.4gr to it. To avoid
confusion, the local variable voicepart.4gr gets deleted.
> singer <- singer                            # store local variable
> singer[, "voicepart.4gr"] <- voicepart.4gr
> rm(voicepart.4gr)                           # remove global variable
The last statement, removing voicepart.4gr as a separate variable, is
important. You can easily generate confusion if a variable of the same name
(voicepart.4gr) exists both in the interactive environment and in a data
frame (singer).
xm <- round(xm, 3)
xv <- round(xv, 3)
Finally, we put together a string vector that contains the information we
want to display. Note that we can derive the names of the groups that
correspond to the names displayed in the Trellis strips on top of the graphs.
This is done by using the names of the vectors generated by sapply.
groupnames <- names(xn)
gr <- histogram(~data | which, data=x)
str <- c("Observations",
         paste(groupnames, ":", xn),
         "",
         "Means",
         paste(groupnames, ":", xm),
         "",
         "Variances",
         paste(groupnames, ":", xv))
We have put the information we want to display into a string, so the only
thing left to do is to show the text in the graph by using key. The result
is shown in Figure 6.11.
Figure 6.11. Trellis histograms of 50 random normal and 50 random uniform
numbers, with a key giving the number of observations, mean, and variance for
each panel.
If you take a closer look at Figure 6.11, you notice that the categories (the
bins of the histogram) are chosen somewhat unluckily. Although the random
uniform numbers are drawn only from the interval (0, 1), the histogram
categories go beyond 0 and 1. You can try to figure out how to change
the bins such that 0 and 1 are used as boundaries.
Note In some versions of S-Plus, you will not see histograms but only
empty graphs (for example, in S-Plus for Linux 5.0 and 5.1 but not 6.0)
due to a problem with the match function handling category data (such as
x$which in our case).
In our case, we can simply convert x$which to a factor using
x$which <- as.factor(as.character(x$which)), or we could redefine the
function match for category data. The following fix was kindly provided by
Chuck Taylor at MathSoft:
setMethod("match", c("category", "ANY"),
function(x, table, nomatch=NA, incomparables=F)
match(as.character(x), as.character(table), nomatch,
incomparables)
)
The same problem might occur for factor-type data: replace "category"
by "factor" in the first line of the statement to redefine the corresponding
function. We will get to the details of statements like the one above in the
chapter on object-oriented programming.
Figure 6.12. Trellis display of the Geyser data set (duration vs. waiting time)
with regression lines added.
key(
x=min(args$x),
y=min(args$y),
corner=c(0,0),
lines=list(lty = 1, col=2:4),
text=list(c("Loess with all data",
"Loess without values 2, 3, 4",
"Linear regression on all data"
),
col=1, font=2, adj=0)
)
}
)
Note that the call to key is also a part of the panel function to ensure that
it works on the current graph settings.
To redo the graph on the modified data set, we basically only need to
specify the new data set. To have the waiting time and its preceding
duration on the same row of the data set, we need to omit the first value
of the waiting times (shifting the data by 1). The last value of the
duration measurements does not have a corresponding waiting time and gets
excluded, too.
Of course, the main title, the axis labels, and the panel function that
works on x and y need to be slightly changed. The resulting graph is shown
in Figure 6.13.
Figure 6.13. Trellis display of the Geyser data set (waiting time vs. duration)
with regression lines added.
xyplot(waiting[-1] ~ duration[-length(duration)],
data=geyser,
main="Regression of Geyser eruption waiting time on
preceding duration",
xlab="duration", ylab="waiting",
panel=function(...)
{
panel.xyplot(...)
panel.loess(..., col=2)
args <- list(...)
omit <- args$x==2 | args$x==3 | args$x==4
s <- 1:group.no
poly.x <- rep(NA,
poly.y <- rep(NA,
left   <- xmean - spread*xsdev
middle <- xmean
right  <- xmean + spread*xsdev
bottom <- s - linew
mid    <- s
top    <- s + linew
line.x <- rbind(  left,  left, left, middle, middle,
                middle, middle, right, right, right, NA)
line.y <- rbind(bottom, top, mid, mid, top,
                bottom, mid, mid, top, bottom, NA)
# retrieve graphical settings
trellis.par.get("add.line")
trellis.par.get("bar.fill")
# finally draw
The function still works; we get nothing shown for category g (Figure
6.14).
Figure 6.14. Trellis display of data spread, mean, and standard deviations. Group
g consists of only missing values.
Finally, how can we change the (vertical) width of the bars and lines,
as we have made them parameters to the function? Within the call to
the Trellis function, we define a new panel function that just calls the
panel.spread panel function with specific parameters.
# bigger boxes and higher lines
Figure 6.15. Trellis display of data spread, mean, and standard deviations with
non-default parameter settings.
7
Exploring Data
variable, variety, contains the name of the barley variety harvested. The
variables year and site contain the year and the name of the site.
To get a more complete overview, we can summarize each variable on its
own by using the summary function.
> summary(barley)
      yield            variety     year           site
 Min.   :14.43   Svansota  :12   1932:60   Grand Rapids:20
 1st Qu.:26.87   No. 462   :12   1931:60   Duluth      :20
 Median :32.87   Manchuria :12             Univ Farm   :20
 Mean   :34.42   No. 475   :12             Morris      :20
 3rd Qu.:41.40   Velvet    :12             Crookston   :20
 Max.   :65.77   Peatland  :12             Waseca      :20
                 Glabron   :12
                 No. 457   :12
                 Wisc No.38:12
                 Trebi     :12
In fact, the summary function shows by default up to seven different values for factor variables, such that entering the command as above gives a
slightly compressed output. We used summary(barley, maxsum=10) to set
the maximum number of rows displayed to 10.
Now, we investigate the data further. To have easier access to the
variables contained in barley, we attach the data set.
> attach(barley)
This gives us direct access to the variables yield, variety, year, and site.
Let us first take a look at what the distribution of yield looks like by using
the stem and leaf display.
> stem(yield)
N = 120
Median = 32.86667
Quartiles = 26.85, 41.46666
Decimal point is 1 place to the right of the colon
1 : 4
1 : 579
2 : 0011122223333
2 : 555666666667777777889999999
3 : 0000001112222223333344444
3 : 5555667777888899
4 : 000112223334444
4 : 567777779999
5 : 00
5 : 5889
6 : 4
6 : 6
The data are ordered and categorized by a base times a power of 10.
The stem and leaf display tells us that the data are put into categories
10, 20, 30, . . . , 60, because the decimal point is 1 place to the right of
the colon. The left-hand side displays the category (or interval), and the
right-hand side of the colon has one digit per observation, displayed by the
first digit following the category digit. We can immediately see that the
smallest values are 14, 15, 17, and 19 (rounded, of course), and by far the
largest values are 64 and 66 bushels per acre. Most of the yields are in
the high twenties and low thirties, and the distribution looks rather asymmetric, having a longer tail towards the larger values (in other words, it is
right-skewed).
The stem and leaf plot can be viewed as a sort of sideways histogram
from which we can see that the data approximately exhibit the bell shape
typical of a normal or Gaussian distribution with no extreme values or
outliers. As with the histogram, the display sometimes strongly depends
on the categorization layout chosen. For glimpsing the distribution and
detecting extreme values in the set, the display above is sufficient. However,
if we want to determine whether the distribution is approximately bell-shaped like a normal distribution, we would need to display it with different
categorizations or use more sophisticated techniques.
If we want to apply the stem display to the other variables of barley, we
would get an error message.
> stem(variety)
Problem in Summary.factor(x, na.rm = na.rm):
A factor is not a numeric object
Use traceback() to see the call stack
What we realize here is that S-Plus behaves somewhat intelligently. It
knows that variety is a factor variable with values such as Svansota
and Manchuria and that a stem and leaf display for such a variable does
not make much sense. Therefore, S-Plus simply refuses to do the desired
display and tells us that our attempt is not meaningful for factors.
If we want to investigate the skewness of the yield variable further, we could use the quantile function and calculate the quantiles at
10, 20, 30, . . . , 90%.
> quantile(yield, seq(0.1, 0.9, by=0.1))
      10%   20%   30%      40%      50%
 22.49667 26.08 28.09 29.94667 32.86667
      60%      70%   80%      90%
 35.13333 38.97333 43.32 47.45666
This shows us that the 10% quantile (22.5) is about 10 units away from the
median (32.9), whereas the 90% quantile is 47.5, being about 15 bushels per
acre away from the median. For a symmetric distribution, the two quantiles
would be about the same distance away from the median. A measure of the
data spread is the interquartile distance, the difference between the 25%
and the 75% quantile.
> quantile(yield, c(0.25, 0.75))
    25%  75%
 26.875 41.4
This tells us that approximately 50% of the data lie within the interval
of 26.9 to 41.4 bushels per acre. For perfectly symmetric data, the median
would be about 34, equally far away from both quartiles. The median 32.87
is a little off, such that making a judgement about the symmetry of the
yield variable might require further investigation.
Another measure of spread is the variance, or the standard deviation.
S-Plus has a built-in function for both, var for the variance and stdev
for the standard deviation.
> stdev(yield)
10.33471
We can also use what we have learned about selecting subsets from data. We
can calculate some interesting statistics by conditioning on certain values.
Let us check if the barley harvest was very different in 1931 and 1932.
Selecting only the values of yield that were obtained in 1931, we obtain
> summary(yield[year==1931])
 Min. 1st Qu. Median  Mean 3rd Qu.  Max.
19.70   29.09  34.20 37.08   43.85 65.77
and for 1932, we obtain
> summary(yield[year==1932])
 Min. 1st Qu. Median  Mean 3rd Qu.  Max.
14.43   25.48  30.98 31.76   37.80 58.17
As we can see, all the numbers are larger for 1931. The 1932 harvest seems
to have been much worse than the 1931 yield, measured in bushels per acre.
We could determine the most fruitful sites for both years to see if they
were the same in both. A good idea is to select the top 10%, the sites above
the 90% quantile. We get the 90% quantile for both years as
> quantile(yield[year==1931], 0.9)
90%
49.90334
> quantile(yield[year==1932], 0.9)
90%
44.28
and retrieve all rows of the Barley data set where the yield in 1931 lies
above the 90% quantile.
> barley[yield>49.90334 & year==1931, ]
      yield          variety year   site
8  55.20000          Glabron 1931 Waseca
20 50.23333           Velvet 1931 Waseca
26 63.83330            Trebi 1931 Waseca
32 58.10000          No. 457 1931 Waseca
38 65.76670          No. 462 1931 Waseca
56 58.80000 Wisconsin No. 38 1931 Waseca
[The corresponding selection for 1932 returns rows from the Waseca and
Morris sites.]
> by(barley, year, summary)
year:1932
      yield              variety        year          site
 Min.   :14.43  Svansota,No. 462 :12   1932:60  Grand Rapids:10
 1st Qu.:25.48  Manchuria        : 6   1931: 0  Duluth      :10
 Median :30.98  No. 475          : 6            Univ Farm   :10
 Mean   :31.76  Velvet,Peatland  :12            Morris      :10
 3rd Qu.:37.80  Glabron          : 6            Crookston   :10
 Max.   :58.17  No. 457          : 6            Waseca      :10
                Wisc No. 38,Trebi:12
------------------------------------------------------------
year:1931
      yield              variety        year          site
 Min.   :19.70  Svansota,No. 462 :12   1932: 0  Grand Rapids:10
 1st Qu.:29.09  Manchuria        : 6   1931:60  Duluth      :10
 Median :34.20  No. 475          : 6            Univ Farm   :10
 Mean   :37.08  Velvet,Peatland  :12            Morris      :10
 3rd Qu.:43.85  Glabron          : 6            Crookston   :10
 Max.   :65.77  No. 457          : 6            Waseca      :10
                Wisc No. 38,Trebi:12
This provides a summary of the whole data set but in the form of two
summary tables, one for 1931 and one for 1932. Note that S-Plus displays
only seven lines by default. Therefore, the 10 different varieties of barley
show up in 4 categories of 1 variety and 3 categories of 2 varieties. In the
table above, Svansota and No. 462 were aggregated into one category. If
we want S-Plus to display more than just 7 lines (because we have 10
varieties), we can add the maxsum=10 setting as an additional parameter to
the by function. Similar to the call to summary above, where we entered
> summary(barley, maxsum=10)
we can now enter
> by(barley, year, summary, maxsum=10)
to obtain the same formatting. In the same way, you could calculate the
summary statistics or other figures for each site or each variety separately.
There is no need for explicit subsetting and cycling through the sites or
varieties.
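For instance, a one-line sketch of such a per-group calculation (assuming barley is still attached, as above):

> tapply(yield, site, mean)

returns the mean yield for each site without any explicit loop; replacing site by variety, or mean by median, works in the same way.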
S-Plus function          Description
quantile                 Quantiles of data
mean                     (Optionally trimmed) mean of data
median                   Median of data
stem                     Stem and leaf display
var                      Variance of data or, if two variables are
                         supplied, covariance matrix
by                       Applies a function to data split by indices
summary                  Summary statistics of an object
apply, lapply,           Calculations on rows or columns of matrices and
sapply, tapply           arrays (apply), on components of lists (lapply,
                         sapply), and data subsets (tapply)
aggregate                Aggregate data by performing a function (like
                         mean) on subsets
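As a small sketch of how these functions relate (again assuming barley is attached), the following two calls both return the mean yield per year; the first splits the data into a list explicitly, while the second does the grouping internally.

> sapply(split(yield, year), mean)
> tapply(yield, year, mean)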
S-Plus function          Description
hist2d                   Two-dimensional histogram (tabulation of two
                         continuous variables)
table                    Tabulation of categorical data
crosstabs                Cross-tabulation
split                    Splits data into a list according to a
                         grouping variable
> table(year, site)
     Grand Rapids Duluth Univ Farm Morris Crookston Waseca
1932           10     10        10     10        10     10
1931           10     10        10     10        10     10
We see that every site had 10 plantings in each year. If we look at the
three-dimensional table of year, site, and variety, we can see that every
combination of year, site, and variety occurs exactly once, as we have
10 different varieties. We shorten the output because of its excessive length.
> table(year, site, variety)
,,Svansota
     Grand Rapids Duluth Univ Farm Morris Crookston Waseca
1932            1      1         1      1         1      1
1931            1      1         1      1         1      1
,,No. 462
     Grand Rapids Duluth Univ Farm Morris Crookston Waseca
1932            1      1         1      1         1      1
1931            1      1         1      1         1      1
...
> hist2d(yield, yield)
$z:
         10 to 20 20 to 30 30 to 40 40 to 50 50 to 60 60 to 70
10 to 20        6        0        0        0        0        0
20 to 30        0       42        0        0        0        0
30 to 40        0        0       39        0        0        0
40 to 50        0        0        0       26        0        0
50 to 60        0        0        0        0        5        0
60 to 70        0        0        0        0        0        2
$xbreaks:
[1] 10 20 30 40 50 60 70
$ybreaks:
[1] 10 20 30 40 50 60 70
Because of the leading $ sign, we can see that the returned data are stored
in a list.
If we have a mixture of continuous and categorical variables and want
to tabulate them, we can build classes on our own by rounding the metric
variable. Let us round the yield to the nearest multiple of 10. The round
function has, besides the data, a second argument digits, telling us the
number of digits to the right of the decimal point. If digits is negative, it
refers to the number of digits to round off to the left of the decimal point.
Rounding to the nearest multiple of 10 means that we need to set digits to
-1.
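As a quick illustration of the sign of the digits argument (the numbers are chosen arbitrarily):

> round(123.456, 1)
[1] 123.5
> round(123.456, -1)
[1] 120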
> yield.round <- round(yield, -1)
We can check the result by tabulating our new variable, yield.round,
which contains only the values 10, 20, 30, 40, 50, 60, and 70.
> table(yield.round)
 10 20 30 40 50 60 70
  1 18 51 31 13  5  1
[Tabulation of yield.round by variety; the rows for 10 and 20 were not
recoverable.]
   Svansota No. 462 Manchuria No. 475 Velvet Peatland Glabron No. 457 Wisc No.38 Trebi
30        4       4         8       4      5        8       5       5          4     4
40        4       2         1       3      4        3       5       3          3     3
50        1       2         1       1      1        1       0       1          2     3
60        0       0         0       0      0        0       1       1          2     1
70        0       1         0       0      0        0       0       0          0     0
We can see that most of the yields are around 30 bushels per acre. There is
only a single harvest in the category 70, stemming from the species No. 462,
but the species Wisconsin No. 38 and Trebi have mostly high and only a
few low yields.
Using the by function, we can investigate this a little further. Let us
calculate a summary for each variety by using the function summary.
> by(yield, variety, summary)
INDICES:Svansota
       x
 Min.   :16.63
 1st Qu.:24.83
 Median :28.55
 Mean   :30.38
 3rd Qu.:35.97
 Max.   :47.33
...
INDICES:Trebi
       x
 Min.   :20.63
 1st Qu.:30.39
 Median :39.20
 Mean   :39.40
 3rd Qu.:46.71
 Max.   :63.83
It seems that Trebi and Wisconsin are the more fruitful varieties, as all
figures are high in comparison to the other varieties.
If we wanted to split our data set into parts, for example to analyze each
variety on its own, we could select only a single variety by entering
> barley.Svansota <- barley[variety=="Svansota", ]
to select only the Svansota data, or we could split up the data by using the
split function.
> barley.split.by.variety <- split(barley, variety)
The rst component of the list barley.split.by.variety now looks like
this:
> barley.split.by.variety$Svansota
yield
variety year site
13 35.13333 Svansota 1931 Univ Farm
14 47.33333 Svansota 1931 Waseca
15 25.76667 Svansota 1931 Morris
16 40.46667 Svansota 1931 Crookston
17 29.66667 Svansota 1931 Grand Rapids
18 25.70000 Svansota 1931 Duluth
73 27.43334 Svansota 1932 Univ Farm
74 38.50000 Svansota 1932 Waseca
75 35.03333 Svansota 1932 Morris
76 20.63333 Svansota 1932 Crookston
77 16.63333 Svansota 1932 Grand Rapids
78 22.23333 Svansota 1932 Duluth
We have all the Barley data of the Svansota variety in a separate structure. The variable barley.split.by.variety contains a list in which the
element names are the different varieties. If you enter
> names(barley.split.by.variety)
you will see the names of the list components. We see that for Svansota,
for example, 1931 was a much better year than 1932, and that Waseca,
Crookston, and University Farm had their biggest harvest in 1931, whereas
the biggest harvests in 1932 were in Waseca, Morris, and University Farm.
Figure 7.1. Histogram of the variable yield in the Barley data set.
[Figure 7.2: Trellis histogram of the barley yield split by site, with panels
for Grand Rapids, Duluth, University Farm, Morris, Crookston, and Waseca.]
values for the 10 different varieties planted. Figure 7.3 shows the histogram
display by year and site.
Figure 7.4. Boxplot display of the barley yield split by site of harvest.
If you are not familiar with boxplot displays, a box consists of a few basic
elements: the lower quartile, the upper quartile, and the median (the dash
in the middle). Therefore, 50% of the data lie within the box limits. The
box position tells us a lot about the data spread within a site and gives
a comparison between the sites. The whiskers show the interval of values
outside the box, and values far outside are represented by horizontal dashes.
To learn more about this standard display tool, you might want to study
Tukey (1977) or other books covering descriptive statistics or exploratory
data analysis.
Many of the details in the preceding displays can be found again in this
display. We see immediately that the Waseca site has very high yields, especially in 1931, but the yields cover a much broader range than at Duluth,
for example. Grand Rapids is always pretty low, and Morris is somewhat
strange, as its yield in 1931 is very low and in 1932 it is very high, in contrast to the other sites. For the other sites, the ordering is very stable for
both years if ordered by yield size.
Alternatively, if you like the graphs better, you could use the old-style
boxplot function. It generates vertical boxplots and employs a different
syntax: it expects a list of variables to plot.
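A minimal sketch of such a call for the Barley data (split returns exactly the kind of list that boxplot expects):

> boxplot(split(barley$yield, barley$site))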
Figure 7.5. Boxplot display of the barley yield for all sites split by year of harvest,
employing the boxplot function.
Note If you try out the boxplot example, you will realize that there is a
slight disturbance with the boxplot labels. The site names are sometimes
too long and overlap. We have split the labels into two lines by storing
the result of split, a list, into a variable and renaming some labels. For
example, we can make Grand Rapids a two-liner by inserting the new
line control character \n in the middle. The command would look like this.
> x <- split(barley$yield, barley$site)
> names(x)[names(x)=="University Farm"] <- "University\nFarm"
> names(x)[names(x)=="Grand Rapids"] <- "Grand\nRapids"
> boxplot(x)
The labels in Figure 7.5 were created in this fashion. We will learn more
about control characters later. They are summarized in Table 9.2 on
page 315.
Continuing our investigation, we can do better at graphically demonstrating that the Morris site is the only one where the yield is higher in 1932
than in 1931. The graph is generated by swapping the two conditioning
variables, year and site, such that all sites in a specific year show up in a
common graph (compare Figures 7.4 and 7.6 and the commands used to
generate them).
> bwplot(year ~ yield | site, data=barley)
Figure 7.6. Boxplot display of the barley yield for all years by site.
Figure 7.6 clearly reveals that the Morris site is the only one that had a
lower yield in 1931 than in 1932. For all other sites, the upper box is clearly
more towards the right than the lower box, indicating a higher yield in 1931.
Using these Trellis displays, Cleveland (1993) and Becker, Cleveland, and
Shyu (1996) conclude that the years 1931 and 1932 must have been swapped
for Morris. Of course, this can no longer be proven, but there is some clear
evidence for it. Although this data set has undergone many analyses in the
statistical literature, this oddity had not been detected, which shows
how graphical methods can help provide more insight into a data set.
To illustrate further graphical displays and how to use them, we will now
take a look at another data set that comes with S-Plus: the Geyser data
set. You can look at it by typing geyser at the command prompt.
The Geyser Data Set
The Geyser data consist of continuous measurements, in minutes, of the
eruption length and the waiting time between two eruptions of the Old
Faithful Geyser in Yellowstone National Park in 1985. Some duration measurements, taken at night, were originally recorded as S (short), M (medium),
and L (long). These values have been coded as 2, 3, and 4 minutes, respectively. The original publication by Azzalini and Bowman (1990) provides
more details.
A look at the geyser data (have a look!) shows that they are stored in
a list with two components: waiting and duration. Lists can be attached
such that we can use waiting instead of typing geyser$waiting all the
time.
> attach(geyser)
We might want to begin by examining the basic data characteristics.
> summary(waiting)
 Min. 1st Qu. Median  Mean 3rd Qu.   Max.
43.00   59.00  76.00 72.31   83.00 108.00
> summary(duration)
  Min. 1st Qu. Median   Mean 3rd Qu.   Max.
0.8333  2.0000 4.0000 3.4610  4.3830 5.4500
First, the durations originally recorded as short, medium, and long were
coded as 2, 3, and 4 minutes, such that quite a few points lie on these lines
parallel to the x-axis. Second, it seems that the data also lie on lines parallel
to the y-axis. Examining the data confirms that waiting time was measured
in integer minutes.
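This check can be sketched in a single line; the expression should return T if every waiting time is a whole number of minutes.

> all(waiting == round(waiting))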
[Figure 7.7: scatter plot titled "Old Faithful Geyser Data: Waiting Time
and Eruption Length", waiting time 50 to 110 on the x-axis, eruption length
on the y-axis.]
[Figure 7.8: scatter plot of duration against waiting, with points labeled A,
B, or C according to cluster and one point marked as an extreme value.]
We will now continue with graphical functions that can deal with multiple variables: multivariate graph types. You can quickly refer back to
Table 6.1 (p. 157) for the available Trellis-type graphics functions.
We have already seen that the function hist2d is useful for tabulating
continuous data. If we apply it to our Geyser data set, we get back a list
with five components, telling us the mid-values of the chosen intervals and
the end points, together with the tabulated data.
> hist2d(waiting, duration)
$x:
[1] 45 55 65 75 85 95 105
$y:
[1] 0.5 1.5 2.5 3.5 4.5 5.5
$z:
           0 to 1 1 to 2 2 to 3 3 to 4 4 to 5 5 to 6
 40 to 50       0      0      0      0     16      0
 50 to 60       0      0      0      1     56      3
 60 to 70       0      0      1      2     28      1
 70 to 80       0     13     19      9     40      0
 80 to 90       1     31     25      9     24      0
 90 to 100      0     11      3      2      3      0
 100 to 110     0      1      0      0      0      0
$xbreaks:
[1] 40 50 60 70 80 90 100 110
$ybreaks:
[1] 0 1 2 3 4 5 6
The table is put into the component z, and we see that the visibility of
effects depends heavily on the intervals chosen. The effect of the three
point clouds is not that clearly visible, although you might guess at them
by studying the values. Experimenting with different choices of interval
bounds might give more insights.
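For instance, narrower bins can be requested through the xbreaks and ybreaks arguments (a sketch with arbitrarily chosen 5-minute and half-minute bins):

> hist2d(waiting, duration, xbreaks=seq(40, 110, by=5),
    ybreaks=seq(0, 6, by=0.5))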
You can use the hist2d function introduced before to set up data to be
used as input to other functions. The persp function produces a 3D-type
plot that accepts input from hist2d.
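A minimal sketch of this combination (assuming persp accepts the hist2d result directly, as the text suggests):

> persp(hist2d(waiting, duration))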
Figure 7.9. A perspective plot of the Old Faithful Geyser data using the persp
function.
Figure 7.9 shows a perspective plot of the surface of the empirical distribution. The x-axis and y-axis are set up by taking the variables xbreaks
and ybreaks of the hist2d output, and the matrix of values in the z component determines the surface height. Try to reveal the three peaks more
clearly by choosing different interval bounds.
Figure 7.10. A perspective plot of the Old Faithful Geyser data using the
wireframe function.
You can use the same data from hist2d to get an image display of the
surface. The image display and the corresponding S-Plus function image
are often used to display geographic data, using latitude and longitude as
axes, and the geographical height of the area is color-coded. This is the
principle of every contour map.
We use the color display to look at the tabulated Geyser data from the
top, getting the heights in different colors.
> image(hist2d(waiting, duration,
xbreaks=seq(40, 110, by=5), ybreaks=seq(0, 6, by=0.5)))
> title("An image plot of the Geyser data")
For a nicer display, we chose different classes than the default settings
used in the perspective plot. The result is shown in Figure 7.11, and although the loss of the coloring in this picture is a disadvantage, we can
still see the three mountains as well as the desert area in the lower left
corner. Other options available are different color schemes, such as a heat
color scheme ranging from cold (blue) to hot (red).
Figure 7.11. An image plot of the Old Faithful Geyser data using image.
Figure 7.12. A contour plot of the Old Faithful Geyser data using contour.
Figure 7.13. A contour plot of the Old Faithful Geyser data using contourplot.
The layout is achieved by calling the print function explicitly and providing it with the split and more arguments to specify the layout and position
of the current graph as well as whether or not there is more to come.
> print(contourplot(hist ~ x * y, data=z2grid,
    xlab="waiting time", ylab="eruption length"),
    split=c(1,1,2,1), more=T)
> print(contourplot(hist ~ x * y, data=z3grid,
    xlab="waiting time", ylab="eruption length"),
    split=c(2,1,2,1), more=F)
The function contourplot accepts more parameters than the ones we
specified. Setting region=T, for example, underlays the contours with a
level plot.
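For example, a sketch reusing the z2grid data frame from the call above:

> contourplot(hist ~ x * y, data=z2grid, region=T,
    xlab="waiting time", ylab="eruption length")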
This concludes our tour of exploring the Barley and Geyser data sets. We
have seen various displays of the data and gained some insight into the
information they contain. You might now want to proceed with pursuing
questions you might still have about the data, or you could go ahead
and look at some of your own data.
To round off, the following section references some more graph types:
dynamic graphics and old-style graphics (non-Trellis).
7.2.1 Dynamic Graphics
S-Plus function          Description
brush                    Interactive brushing (highlighting) of points in
                         a scatterplot matrix
spin                     Rotation of a three-dimensional point cloud
7.2.2 Old-Style Graphics
Occasionally one might want to use S-Plus old-style graphics. The Trellis
concept has extended the functionality and offers better visualization (for
example, by having axis tickmarks on all four sides of a graph). It can
nevertheless come in handy to know about alternatives, and some of them
are given in Table 7.4 and Table 7.5. The Trellis-type functions were given
in Table 6.1 (p. 157).
Most of the graphics functions have a long list of optional parameters.
It is, of course, possible to use a function only in its elementary form (like
pie(x)), but if you want to add colors, modify the labels, or explode one of
the pies in the pie chart, you need to know what you can do. The boxplot
function, for example, currently has 27 optional parameters.
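To see which arguments a given function accepts, you can inspect its argument list directly (a sketch):

> args(boxplot)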
Table 7.4. Old-style graphics functions.
S-Plus function          Description
boxplot                  Boxplot display
density                  Density estimate for the distribution of the
                         data. Use plot(density(x)) to see it graphically
hist                     Histogram display
identify                 Identification of the data point next to the
                         point clicked on in the graphics window
pie                      Pie chart
plot                     Standard plot, depending on the data
Table 7.5. Old-style multivariate graphics functions.
S-Plus
function
Description
Type character   Function type
d                Density function
p                Probability function or cumulated density function
q                Quantile function (inverse probability function)
r                Random number generation
Table 7.7. Distribution-related functions in S-Plus: Discrete distributions.

Distribution          S-Plus name1   Parameters
binomial              binom          size, prob
geometric             geom           prob
hypergeometric        hyper          m, n, k
negative binomial     nbinom         size, prob
Poisson               pois           lambda
Wilcoxon (rank sum)   wilcox         m, n

1 The characteristic letter, one of d, p, q, r, plus the listed name, gives the name of the S-Plus function.

Use the sample function to generate random numbers from a discrete distribution.
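A minimal sketch of this (the outcome values and probabilities are made up for illustration):

```
> # Draw 10 values from the outcomes 1, 2, 3 with unequal probabilities
> sample(c(1, 2, 3), size=10, replace=T, prob=c(0.2, 0.3, 0.5))
```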
Table 7.8. Distribution-related functions in S-Plus: Continuous distributions.

Distribution            S-Plus name1   Parameters2
beta                    beta           shape1, shape2
Cauchy                  cauchy         location=0, scale=1
chi-square              chisq          df
exponential             exp            rate=1
F                       f              df1, df2
gamma                   gamma          shape
logistic                logis          location=0, scale=1
lognormal               lnorm          meanlog=0, sdlog=1
normal                  norm           mean=0, sd=1
normal, multivariate3   mvnorm         mean=rep(0,d), cov=diag(d)
stable                  stab           index, skewness=0
Student t               t              df
uniform                 unif           min=0, max=1
Weibull                 weibull        shape, scale=1

1 The characteristic letter, one of d, p, q, r, plus the listed name, gives the name of the S-Plus function.
2 If parameters are preset with a default value, as in mean=0, they do not need to be specified unless a value other than the default is desired.
3 There is no quantile function for the multivariate normal distribution.

Note The standard deviation, not the variance, is the argument used for dispersion by the functions for the normal distribution. The standard deviation is simply the square root of the variance.
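For instance, to draw from a normal distribution with variance 5, the sd argument must be the square root of 5 (the mean and sample size here are just an illustration):

```
> # Normal random numbers with mean 3 and variance 5: pass sd = sqrt(5)
> rnorm(4, mean=3, sd=sqrt(5))
```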
All these functions take numbers and vectors as input arguments. For a vector, the corresponding value for each element is computed.
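A small sketch of this vectorization: dnorm and pnorm applied to a vector return one value per element.

```
> x <- c(-1, 0, 1)
> dnorm(x)    # standard normal density values: 0.2420 0.3989 0.2420
> pnorm(x)    # the corresponding cumulative probabilities
```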
Graphing Distributions
You will often want to see what a distribution looks like, which usually
means looking at the density function. Here, we need to make a distinction
between continuous and discrete distributions.
Recall that a continuous density has a continuous support; that is, all
values of the density function in a given interval are greater than zero. This
interval usually ranges from minus innity to plus innity. For this reason,
we cannot graph the whole distribution from minus to plus innity, but
have to restrict ourselves to a part of it. A good idea is to plot, for example,
90% of the density support.
We find by using qnorm(c(0.05, 0.95)), qt(c(0.05, 0.95), 5), and qcauchy(c(0.05, 0.95)) that the interval from −6 to +6 covers more than 90 percent of all three densities. Similarly, we can determine that the densities for all three distributions at x = 0 lie below 0.4.
We can therefore use the function xyplot and initialize it by plotting the vector of 0 and 0.4 on the y-axis against −6 and +6 on the x-axis. We now need to write a panel function that graphs our desired densities. Note that the values passed to xyplot are never plotted, as the panel function ignores them.
xyplot(c(0, 0.4) ~ c(-6, 6),
    main="A graphical comparison of distributions",
    ylab="Density", xlab="", npoints=1000,
    panel=function(..., tdf=5, npoints=100)
    {
      args <- list(...)
      x <- seq(args$x[1], args$x[2], length=npoints)
      lines(x, dnorm(x), col=1, lty=1)
      lines(x, dt(x, tdf), col=2, lty=2)
      lines(x, dcauchy(x), col=3, lty=3)
      # add the key/legend
    }  # end panel
)  # end xyplot
The legend (using the key function) should be created inside the panel function to ensure it uses the current graph device settings. We specify lines and their types (solid, dashed) and colors, and the text and its attributes (left-adjusted with adj=0, and the character expansion set to cex=0.6). The result is shown in Figure 7.14.
For a discrete distribution, the density can only be calculated at discrete points. For example, consider the density function of the binomial distribution, the function describing the probability of x heads out of n coin tosses, where the probability of a head for a single toss is p. This function is greater than zero only at the points 0, . . . , n. To graph this adequately, we plot vertical height bars at the discrete points. The result is shown in Figure 7.15.
> n <- 20                 # number of coin tosses
> p <- 0.6                # probability for heads
> x <- 0:n
> y <- dbinom(x, n, p)    # prob. for 0..n heads
> xyplot(y ~ I(0:20), type="h",
    xlab="", ylab="Density Value",
    main=paste("Density of the binomial(", n, ", ", p,
    ") distribution", sep=""))
Notice the usage of the function I: the function xyplot interprets the statement y ~ 0:20 as a formula, resulting in an error. Adding I around 0:20 leaves the argument unchanged. (If we were using the older plot function, we could have entered plot(0:20, y).)
Figure 7.14. A graphical comparison of distributions: the Normal(0,1), t(5), and Cauchy densities.
Figure 7.15. Density of the Binomial(20, 0.6) distribution.
What is the output? S-Plus tells us that we calculated the paired t-test and gives us the value of the t-distributed test statistic, 0.6595. Based on 19 degrees of freedom (20 sample pairs minus 1), this results in a p-value of 0.5175. For a p-value smaller than α = 0.05, we would have concluded that we should reject the hypothesis.
In our case, however, we decide that we cannot reject the null hypothesis, as there is not enough evidence that the mean yields from Crookston and Morris differ from one another. This means that the estimated difference in yields of 2.02 is not statistically different from zero. In fact, the 95% confidence interval of (−4.39, 8.43) indicates where we expect the true mean difference to lie. Note that the confidence interval contains the mean value of zero (i.e., no difference between the two sites).
Earlier, we found some evidence that at the Morris site, the years 1931 and 1932 had been accidentally swapped. This makes us wary of deciding to use a paired t-test. You can check on your own how large the difference is if an unpaired t-test, which does not assume related pairs of data, is carried out.
Figure 7.16. The t density with the acceptance region, the two rejection regions, and the paired and unpaired test statistics.
The middle range shows the acceptance region, where the hypothesis cannot be rejected, and the two values of the t-test for the paired and unpaired samples have been added. Note that we get different degrees of freedom for the two t-tests, 19 for the paired and 38 for the unpaired test, but the difference in the graphical display is almost invisible.
Refer to the manual for further details about testing, the background of
testing, and the detailed usage of S-Plus functions. The available tests are
given in Table 7.9. If you are familiar with a test, studying the reference
manual or the online documentation should provide the information you
need to carry out the testing.
Table 7.9. Statistical tests in S-Plus.
S-Plus function
binom.test
chisq.gof
chisq.test
cor.test
fisher.test
friedman.test
kruskal.test
ks.gof
mantelhaen.test
mcnemar.test
prop.test
t.test
var.test
wilcox.test
> qnorm(1-alpha, m, s)   # greater than
> qnorm(alpha, m, s)     # less than
where alpha is typically 0.05 or 0.01, and m and s are the mean and standard deviation of the hypothesis. If the test is two-sided, the critical values are the α/2 and 1 − α/2 quantiles of the test statistic distribution. We would calculate the following critical values for the normal distribution:
229
230
7. Exploring Data
231
We calculate the test statistic for the paired and unpaired tests next and access and store the t-statistics directly. See the manual pages to look at what is returned from t.test. Next, we plot the points of the test statistic and add text to it, once right-justified and once left-justified, such that we get a back-to-back effect.
Finally, we add the text for the rejection regions, adjusted to the borders, and draw the line showing the acceptance region's limits by creating a vector with four points, which covers the polygon line. On top of it, we add a string describing the interval line.
This code shows the basic elements of tests, critical values, and acceptance and rejection regions for those of you not so familiar with such ideas. As a bonus, you now have some ideas of how flexible S-Plus graphs are, and some tricks to use in your own graphs.
Notice finally that we attempted to keep the graph flexible in that you can modify the sequence of values, the data used, and the alpha level. The graph will still produce a reasonable placement of graph elements such as text, dots for test statistics, lines added, and so forth.
7.5.1
7.5.2
Most S-Plus functions do not simply ignore missing values, if there are any, when performing their intended action. However, ignoring them is frequently exactly what you want; that is, you often want to calculate the mean or variance over just the nonmissing data. To assure that nothing unintended happens, you must set a parameter to do so. This follows the rule that an operation on data with missing values generates a missing value (NA) as the result (which can be very helpful in error tracing).
Functions like mean and median have an option na.rm. By default, it is set to F, indicating that NA values will not be removed prior to the calculation. If you set na.rm=T, though, the operation will be performed after excluding all NA values. Here is a short example:
> x <- c(1,2,NA,4)
> mean(x)
NA
> mean(x, na.rm=T)
2.333333
In the latter expression, the S-Plus function removes all missing data and
performs the action on the remaining data. The result is that the mean
function divides the sum of data in the example above by 3 and not by 4,
the total length of x.
The same effect is obtained by removing the missing data by hand, as in
> mean(x[!is.na(x)])
The functions var (to calculate the variance) and cor (to calculate the correlation of variables) offer several ways to handle missing values. The option na.method is set to the default "fail", but it can be set to "include", "omit", or "available". You might want to look up the documentation for var to see what the difference between na.method="omit" and na.method="available" is. The functions react according to the setting of the parameter.
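A small sketch of the effect (the vector is made up for illustration; with the default na.method="fail", var would stop with an error here):

```
> x <- c(1, 2, NA, 4)
> var(x, na.method="omit")   # the variance of the nonmissing values 1, 2, 4
```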
Functions like lm (for linear models), glm (for generalized linear models), or tree (to calculate a regression tree) have a parameter na.action that is set to na.fail per default. This parameter setting (in contrast to the parameter na.method for var and cor) is not specified as a character string: it specifies a function. You can therefore specify your own function to handle the missing values or use one of the predefined ones, such as na.fail or na.exclude.
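A quick sketch (the data frame dd and its variables are made up for illustration): passing one of the predefined missing-value handlers to lm.

```
> # Fit a linear model, excluding rows that contain an NA
> fit <- lm(y ~ x, data=dd, na.action=na.exclude)
```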
If you want to exclude the missing values before calculating the mean and the variance of a data set, the options to specify have different names, because they allow different actions to be taken. One could use mean(x, na.rm=T) and var(x, na.method="omit"), for example. In contrast to var, the function stdev accepts the parameter na.rm similarly to mean, because it always returns the standard deviation of the entire data set, even if it contains multiple columns.
7.5.3
Missing values are ignored in graphical routines, except for special cases such as when you want to plot an x consisting only of missings (this would return an error message). So, for example, the plot and lines functions ignore all points where one of the coordinates is an NA.
Note Using NA values can be useful if you want to plot an interrupted line. Example:
> plot(1:10, c(2,3,2,4,NA,3,2,2,4,5), type="l")
creates a line that is not connected between the points (4, 4) and (6, 3).

7.5.4 Infinite Values
When S-Plus comes across Inf, it interprets it as infinity. You can check for infinite values by using is.inf.
Sometimes, it might be necessary to check for missing and infinite values separately, because an infinite value is not missing and vice versa. See the following example.
> x <- (-1:1)/0
> x
-Inf NA Inf
> is.na(x)
F T F
> is.inf(x)
T F T
One can check if a value is finite by using is.finite.
> is.finite(x)
F F F
Have a look at the definition of is.finite; you will see that it uses other functions, which are worth a look as well.
If you need the indices of the NA values in a vector, you can create a sequence of the indices and extract only those where is.na returns a TRUE.
> seq(x)[is.na(x)]
2
Alternatively, there is the function which.na which does exactly the same
and returns the indices of all NA values.
> which.na(x)
2
Similarly, the function which.inf returns the indices of all innite values.
> which.inf(x)
1 3
There are some simple arithmetic rules for infinite values. You can explore them on your own or go through the exercises that follow.
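A few of these rules, sketched at the command line (each result follows from the behavior of NA and Inf described above):

```
> Inf + 1      # Inf: infinity absorbs finite values
> 1/Inf        # 0
> -1/0         # -Inf
> Inf - Inf    # NA: the result is undefined
```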
7.6 Exercises
Exercise 7.1
Generate 100 and 1000 random numbers, both samples from a normal distribution with mean value 3 and variance 5. Draw histograms with bin widths 0.5, 1, and 2 for each of the two samples.
Remember that all figures have exactly the same underlying distribution. Plot them all in a single graphics window and label them accordingly. What is visible?
Exercise 7.2
Create a vector containing three real numbers, one missing value (NA), and one infinite value (Inf). Calculate the mean and variance of the nonmissing data. Figure out how the missing and infinite data are treated in a simple plot and histogram. What happens if you divide x by itself? Can you explain this behavior?
Exercise 7.3
Do an analysis of the Car data set provided with S-Plus. The data set, stored under car.all, contains a set of cars plus a variety of their attributes, such as fuel consumption, price, and many more.
Before starting the analysis, determine which car models are contained in the data and what variables are given in the data set. Extract the fuel consumption (miles per gallon) and determine for each type of car (small, medium, compact, large, van, sporty) the model with the lowest and the highest fuel consumption.
On the basis of average fuel consumption and tank size, calculate the maximum travel distance without a gas refill.
Take the variables maximum travel distance, tank size, mileage, and horsepower, and display them graphically. Try to determine, using only graphics, the two variables with the strongest dependency.
Check your guess by calculating the correlation matrix for these variables.
Exercise 7.4
Test the S-Plus random number generator with the dice test. Simulate a number of die rolls and check if they all occur with approximately the same frequency. In other words, we need to simulate a die roll, where the six possible outcomes (1, 2, 3, 4, 5, and 6) occur with the same probability.
Calculate the test statistic T = Σ_{i=1}^{6} (n_i − n/6)² / (n/6), which is known to have an asymptotic chi-square distribution with 6 degrees of freedom. Use S-Plus to calculate the critical value for the chi-square distribution at the 5% level. Can you reject the hypothesis that all numbers occur with the same frequency?
Repeat this exercise with different numbers of rolls of the die.
Exercise 7.5
In this exercise, we will use simulation to examine a statistical hypothesis test. We want to test the null hypothesis that the mean of a normally distributed sample is 0, with known variance 1. Let the alternative be that the mean is not equal to 0 (two-sided).
We know that if we set the significance level to α (alpha), the test will reject the null hypothesis α·100% of the time (for a large number of samples), although the data actually come from a normal distribution with mean 0 and variance 1. Show this by generating 1000 samples of size 100, calculating for each sample whether or not the null hypothesis is rejected, and summarizing how often the null hypothesis was rejected.
In a second step, examine the power of the test: the probability of rejecting the null hypothesis given that the alternative is correct. Simulate a data set coming from the alternative hypothesis that the mean is equal to 0.1, and estimate the power of the test based on the sample.
Exercise 7.6
Suppose we had forgotten the formula for computing the area of a circle. All is not lost. We can easily use the computer to approximate a circle's area by simulation. For convenience, we want to determine the area of the unit circle with radius 1 and name this unknown constant π (pi).
We know that the area covered by a square with the corner points (1, 1), (−1, 1), (−1, −1), and (1, −1) equals 4, as the length of each side is 2. This square completely contains the unit circle. If we had a random point somewhere in the square, we could calculate the probability that the point is also inside the unit circle. A random point Z is determined by two coordinates, say X and Y, which have independent uniform distributions on the interval [−1, 1].
The probability of Z being inside the circle is given by the ratio of the circle's area and the square's area or, more mathematically, P(Z inside the unit circle) = π/4.
7.7 Solutions
Solution to Exercise 7.1
This exercise illustrates a limitation of the histogram display as such: it depends on the selection of bins. The histogram counts, for each given bin (interval), how many observations fall into that particular bin. The frequencies (absolute or as a percentage of all observations) are graphed as bars, with the heights of the bars corresponding to the frequencies.
We generate two samples from the same distribution (with different sample sizes). Both samples are displayed in three histograms with varying bin widths. The figure created is shown in Figure 7.17.
Figure 7.17. Six histograms with the same underlying distribution: samples of size n=1000 and n=100, each displayed with bin widths 0.5, 1, and 2.
Comparing the graphs in the same row of Figure 7.17, the data are always exactly the same, but the bin width parameter of the histogram is changed. If you compare two graphs in the same column, the difference comes from the sample size (100 versus 1000) and, of course, from stochastic variation.
We set up a short program that uses two lists, n and binwidths, to generate the corresponding samples, bins, and histograms. You just need to add another number to n or binwidths to add further histograms.
> mean(y)
2
> var(y)
1
Alternatively,
> y <- x[is.finite(x)]
gives the same results.
Now, let's divide x by itself:
> x/x
1 1 NA 1 NA
Note that a missing value divided by anything else results in a missing value again, and infinity divided by infinity also results in a missing value.
"Width"
"Frt.Leg.Room"
"Luggage"
"Turning"
"Gear.Ratio"
"Dist.n"
"Disp2"
"Eng.Rev2"
"Mileage"
"Height"
"Rear.Seating"
"Weight"
"Disp."
"Eng.Rev"
"Tires2"
"HP.revs"
"Price"
"Type"
Since we have already attached the data set, we can access the variables
just by their names. For example, entering HP returns the HP data for all
the cars in the data set.
The names of the cars in the data set are returned by the names of the
rows of the data matrix. Therefore, we enter
> dimnames(car.all)
to see the car names and the variable names, or just
> dimnames(car.all)[[1]]
to get only the first component of the list, the names of the cars. Now, we save the names of the cars for later use.
> car.names <- dimnames(car.all)[[1]]
Next, we calculate the maximum travel distance for all cars.
> max.travel <- Mileage * Tank
To extract the categories and the car with the longest maximum travel distance for each category, we need to figure out what the categories are and use the by function to do the calculation by category. We first figure out that the variable's name for the category of car is Type. We can use the summary command to view the data structure.
> summary(Type)
Compact Large Medium Small Sporty Van NAs
     19     7     26    22     21  10   6
We see that there are only 7 cars classified as large in the data, 6 have no classification, and there are more than 20 cars in the groups Medium, Small, and Sporty.
Determining the cars with the highest fuel consumption (or the lowest mileage) in the different categories is done by using min in combination with by as follows:
> by(Mileage, Type, min, na.rm=T)
Compact Large Medium Small Sporty Van
     21    18     20    25     24  18
Note that we need to supply the parameter na.rm=T to the min function, as there are missing values in the data. The minimum would be missing if any of the data were missing.
In the same fashion, we obtain the maximum values.
> by(Mileage, Type, max, na.rm=T)
Compact Large Medium Small Sporty Van
     27    23     23    37     33  20
You might have produced slightly different output, because we used the command unclass(by(...)) to obtain this shortened output.
A first display of the data can be generated using splom. To analyze the four variables HP, Tank, Mileage, and max.travel, we create a new data set containing only these data.
> car.subset <- data.frame(HP, Tank, Mileage, max.travel)
> splom(car.subset)
The scatterplot matrix (splom), a matrix of scatterplots for each combination of two variables, is shown in Figure 7.18. We immediately see strong relationships between many variables, such as between HP and tank size, or between tank size and mileage in a negative way: the smaller the mileage, the larger the tank, and vice versa.
Figure 7.18. Some components of the Car data set (horsepower, tank size, mileage, and maximum travel distance).
The function is.na is useful for checking if any data are missing, and if you remember that TRUE values are coded with 1 and FALSE values with 0, then it becomes clear what to do. We sum up the TRUE and FALSE values obtained from is.na and check if the sum is 0 (indicating that none of the data are missing) or greater than 0 (indicating that one or more variables are missing).
> problems <- apply(is.na(car.subset), 1, sum)!=0
Instead of using apply, we can also use rowSums, which is typically faster than apply.
> problems <- rowSums(is.na(car.subset))!=0
Alternatively, we can check for each row (dimension 1) of the matrix if any NA is present.
> problems <- apply(is.na(car.subset), 1, any)
Finally, we remove the rows that might cause problems and reduce the data set to the observations with no missing values.
> car.without.missings <- car.subset[!problems, ]
All rows flagged in problems (each row denotes a car) are removed from car.subset, and the new data set is stored in car.without.missings.
> n <- 6000
> rolls <- sample(1:6, n, replace=T)
> rolls.count <- table(rolls)
> rolls.count    # the tabulated outcome
  1    2    3    4    5    6
988 1028  988 1062  944  990
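The chi-square test statistic itself can be computed from the tabulated counts (a sketch of the T from Exercise 7.4; the expected count per face is n/6):

```
> # T = sum over the six faces of (observed - expected)^2 / expected
> test.stat <- sum((rolls.count - n/6)^2 / (n/6))
> test.stat
8.152
```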
Finally, we need the critical value to compare to the test statistic. We can derive the critical values for three significance levels simultaneously.
> alpha <- c(0.9, 0.95, 0.99)
> qchisq(alpha, 6)
10.64464 12.59159 16.8119
Our test statistic is smaller than all three critical values of the chi-square distribution, such that we cannot reject the hypothesis that we generated rolls with a fair set of dice. In fact, the lowest alpha level at which the hypothesis could be rejected (the p-value) is calculated as
> 1-pchisq(test.stat, 6)
0.2271776
This result helps give us confidence that the S-Plus random number generator is doing a good job.
n         Estimate   Error
100       2.96       0.182
1000      3.088      0.0535
10,000    3.1532     0.0116
100,000   3.1366     0.00499
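Estimates like those above can be reproduced with a few lines (a sketch under the setup of Exercise 7.6; the exact numbers depend on the sequence of random numbers):

```
> n <- 10000
> x <- runif(n, -1, 1)          # random x-coordinates in [-1, 1]
> y <- runif(n, -1, 1)          # random y-coordinates in [-1, 1]
> inside <- (x^2 + y^2 <= 1)    # T if the point falls into the unit circle
> 4 * sum(inside)/n             # the square's area (4) times P(inside)
```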
To visualize what we just simulated, we plot the square and the circle and add the simulated points to it. The result might look like Figure 7.19, depending on the sequence of random numbers.
> xyplot(c(-1, 1) ~ c(-1, 1),
    ...
)  # end xyplot

Figure 7.19. Estimation of pi using simulation.
Notice that we use 10 points as the default (in the definition of the panel function). The value is overridden with 1000 in the call to xyplot, as the Trellis functions pass all arguments on to the panel function (which is why one should use ... as a parameter in the declaration of a panel function).
8
Statistical Modeling
8.1.1 Regression

For our example, we will use the Swiss fertility data set that is directly accessible from within S-Plus by typing swiss.fertility and swiss.x. We chose this data set because it is easy to understand and work with. The data come from French-speaking provinces in Switzerland from about the year 1888. Table 8.1 describes the variables.
Table 8.1. Swiss fertility data set.

Variable            Description
swiss.x
  Agriculture       Percentage of males working in agriculture
  Examination       Percentage of draftees receiving the highest mark
                    on the army examination
  Education         Percentage of draftees with education beyond
                    primary school
  Catholic          Percentage of Catholics in the population
  Infant Mortality  Percentage of live births who live less than 1 year
swiss.fertility     Standardized fertility measure
We will restrict our discussion here to only one predictor variable, education, so we will not use the entire swiss.x matrix. The outcome of interest is the vector swiss.fertility. Linear regression (simple or multiple) is done using the lm function. All you have to supply is the outcome variable followed by the predictor variable. For simplicity, we put the outcome and predictors together into a single data frame. This avoids having to copy individual vectors out of the swiss.x matrix.
> swiss <- data.frame(swiss.fertility, swiss.x)
> dimnames(swiss)[[2]][1] <- "Fertility"
> dimnames(swiss)[[2]]
"Fertility" "Agriculture" "Examination" "Education"
"Catholic" "Infant.Mortality"
> fit <- lm(Fertility ~ Education, data=swiss)
> summary(fit)
Call: lm(formula = Fertility ~ Education,
    data = swiss)
Residuals:
    Min     1Q Median    3Q   Max
 -17.04 -6.711 -1.011 9.526 19.69
Coefficients:
            t value Pr(>|t|)
(Intercept) 37.8357   0.0000
  Education -5.9536   0.0000
Correlation of Coefficients:
(Intercept)
Education -0.7558
It is convenient to put the fit of the model into a variable because, as we will see later, model building and diagnostic routines use the information contained in it.
Notice that the first part of the output from the fit of the regression model contains basic summary information about the residuals. This is one simple way of checking the fit of the model. If the residuals are skewed in any way, then chances are that more formal diagnostic techniques will confirm that the model does not fit well.
The next section of the output contains information on the coefficients (i.e., the specific terms included in the model). Included are the estimates of the coefficients, standard errors, t-statistics, and p-values with which you can assess the fit of any of the specific terms in the model. An intercept is included in the model by default, but it may be removed (add a -1 as a predictor variable). Note that in the fit of the model, both the intercept and education are statistically significant. In practical terms, this means that the education level is significantly related to the fertility level. Further, the coefficient of education is negative (-0.8624), implying that an increase in education is associated with a decrease in fertility. With simple linear regression, you can take the square root of the R-squared value (here, 0.4406) to obtain, up to the sign of the slope, a correlation of -0.6638. The cor function yields the same result:
> cor(swiss$Education, swiss$Fertility)
-0.6637889
The summary function also gives the residual standard error, R-squared, and the overall F-test. The residual standard error is used in the calculation of many regression tests, R-squared can be helpful in assessing the degree of association between outcome and explanatory variables, and the overall F-test is used for the entire model, as opposed to each separate factor.
8.1.2 Regression Diagnostics

At first glance, the regression model above appears to fit the data quite well. However, as part of a proper and thorough regression analysis, you should check that the model assumptions are met. You can easily do this by examining the residuals that are contained in the object fit. Four easy graphs to assess the model fit and check the assumptions are a scatterplot of the data, a histogram of the residuals, a scatterplot of the residuals versus the fitted values, and a QQ-plot of the residuals. The commands for obtaining these graphs are shown below. The graphs appear in Figure 8.1.
Figure 8.1. Diagnostic graphs for the regression fit: Fertility versus Education with the regression line, a histogram of the residuals, residuals versus fitted values, and a normal QQ-plot of the residuals.
> par(mfrow=c(2, 2))
> attach(swiss)
> plot(Education, Fertility)
> detach()
> abline(fit)
> abline(h=0)
The last plot is a QQ-plot with the expected line to tell us whether the
residuals follow a normal distribution.
> qqnorm(resid(fit))
> qqline(resid(fit))
The scatterplot of the data shows that there is indeed an approximately linear relationship between the two variables, which is accentuated by the estimated regression line that has been superimposed. A histogram of the residuals would ideally look like the bell-shaped curve of the normal distribution. The histogram here, with a little imagination, is not so far from that ideal. The scatterplot of the residuals versus fitted values (values predicted from the regression equation) should show no discernible pattern. A lowess curve has been added to help detect whether or not hidden (or obvious) trends exist in the residuals. The flat curve indicates that perhaps the assumptions of the linear regression are met. A last check is to use a QQ-plot to assess whether the residuals follow an approximately normal distribution. The superimposed straight line is what we would expect if the residuals followed a normal distribution exactly. What we find is that our residuals, although not perfectly following a normal distribution, are not that far away from the line.
Table 8.2. Selected statistical models in S-Plus.

Function    Type of model
lm          Linear regression
aov         ANOVA
glm         Generalized linear models
gam         Generalized additive models
loess       Local regression smoother
tree        Classification and regression tree models
nls, ms     Nonlinear models
manova      Multivariate ANOVA
factanal    Factor analysis
princomp    Principal components analysis
crosstabs   Cross-classifications
survfit     Kaplan-Meier survival curves
coxph       Cox proportional hazards models

with the crosstabs function. Finally, for our short list, the most common survival analysis procedures can be done using the survfit and coxph functions.
S-Plus has several more advanced ways of specifying formulas for models. They are listed, along with those already discussed, in Table 8.3.

Table 8.3. Model syntax.

Terms included   Functionality
A+B              Include both A and B
A:B              The interaction between A and B
A*B              Shorthand for A + B + A:B
A%in%B           A nested within B
A/B              Shorthand for A + B%in%A
A-B              Include A but remove B
A^m              All terms in A crossed up to order m
8.4 Regression

We now show, very briefly, how to use elementary regression functions. We include in the descriptions some commonly used modeling and model-building techniques. The nonparametric regression functions, in particular, are one of S-Plus's major strengths, and you should consult the manual for detailed explanations regarding their use.
There are, in principle, two types of model fits in S-Plus, which are categorized by what is returned. Functions such as linear regression return the parameters of the fit; that is, they return what is needed to determine the functional form. On the other hand, nonparametric functions such as ksmooth (kernel smoothing) return a list of points, x and y, where y contains the smoothed data at points x, as there is no explicit functional form for these types of smoothers.
For the first type of model, it is helpful to do the fit with, for example, lm,
> fit <- lm(y ~ x)
graph the data with
> plot(x, y)
and then add the fitted line with the command
> abline(fit)
These commands were shown in Section 8.1.2.
In the second case, where the return argument consists of a list with x and y components (and others), you enter the following commands instead:
> fit <- ksmooth(x, y)
> plot(x, y)
> lines(fit)
8.4.1

            t value Pr(>|t|)
(Intercept) 37.8357   0.0000
  Education -5.9536   0.0000

            t value Pr(>|t|)
(Intercept) 31.5636   0.0000
  Education -6.0968   0.0000
   Catholic  3.7224   0.0006
RSS
4015.236
3953.270
3549.610
3053.627
3123.989
Cp
4372.145
4488.635
4084.975
3588.992
3659.354
Applied to our data, we see that with the largest sum of squares and the
lowest Cp value, the variable Catholic actually does provide the best fit
when one variable is added to a model that includes Education.
On the other hand, we could have chosen to use stepwise regression with
the step function or to add a lot of variables and remove them using the
drop1 (opposite of add1) function. The choice between using add1, drop1,
or step is left entirely up to you. In general, they will yield the same model,
but often in practice, they do not. So, have fun.
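As a brief sketch (assuming fit1 is the full linear model fitted earlier in the chapter), the automatic procedures are called simply as:
> drop1(fit1)
> step(fit1)
Here, drop1 shows the effect of deleting each single term, and step repeatedly adds or drops terms until the model criterion stops improving.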
S-Plus offers a host of residual checking techniques. Feel free to use
the simple guidelines shown in Section 8.1.2, but you can do some more
advanced work by simply plotting the fit from the regression. Plotting the fit of the regression model yields six graphs, with one
diagnostic plot per graph: residuals versus fitted values, square root of the
absolute residuals versus fitted values, response versus fitted values,
normal QQ-plot of residuals, r-f spread plot, and Cook's distances.
If, for convenience, you prefer to view them all at once on one page, enter
the mfrow option as follows to set up the graphical area for a matrix layout
of graphs:
> par(mfrow=c(3, 2))
> plot(fit1)
S-Plus can return the predicted values, standard errors, and confidence
intervals using such functions as predict and pointwise.
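For example (a sketch assuming the fit1 model from above and a hypothetical data frame new.data containing the same predictor variables):
> pred <- predict(fit1, new.data, se.fit=T)
> pointwise(pred, coverage=0.95)
The first call returns the predictions together with their standard errors; the second turns them into pointwise 95% confidence intervals.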
A variable's polynomial effects are obtained with the poly function. To
include linear, quadratic, and cubic terms of X in a model, use poly(X,
3), but beware that the resulting parameter estimates are in terms of orthogonal polynomial contrasts and are not on the same scale as X. To see
the parameter estimates on the original scale, use
> poly.transform(poly(X, 3), coef(fit))
Indicator Variables
S-Plus can automatically generate indicator variables using the I function.
Entering the term I(age > 70) into your model is a way of looking at the
effect of young versus old rather than the effect of age on a continuous
scale.
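As a sketch (y and age are hypothetical variables assumed to exist in the current session):
> fit.ind <- lm(y ~ I(age > 70))
The coefficient of the indicator term then estimates the difference in mean response between the two age groups.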
Factors
Factors are related to the issue of indicator variables. You may often have
variables such as political affiliation or treatment that can assume
one of several values. Giving numeric codes to these types of variables
and entering them into your model as continuous variables makes little
sense. To see the problem here, consider the following example. Suppose a
questionnaire had been done collecting information on political affiliation,
8.4.2
ANOVA
ANOVA model is a regression model. The difference comes more with the
presentation of the results, as we see in the following example.
The aov function is used in exactly the same fashion as the lm function
to fit the desired model.
> swiss <- data.frame(swiss,educ.ord)
> fit.ord <- aov(Fertility ~ educ.ord, data=swiss)
The summary function is used to display the classical ANOVA table
with degrees of freedom, sums of squares, mean squares, F-values, and
corresponding p-values.
> summary(fit.ord)
           Df Sum of Sq  Mean Sq  F Value      Pr(F)
 educ.ord   2  1350.891 675.4456 5.100272 0.01018307
Residuals  44  5827.064 132.4333
We see from the large F-value and very low p-value that education is related
to fertility, but we already knew this from the regression model. Is there
any information we can obtain from the ANOVA model that we haven't
already seen with the regression model? Of course!
With a factor variable, you can split the degrees of freedom into contrasts
of interest. Take the educ.ord variable from above, which has three levels
and hence two degrees of freedom. We could use one degree of freedom for
a linear test and one for a quadratic test. This breakdown of the degrees of
freedom leads to results that are comparable to using polynomial regression,
as mentioned in Section 8.4.1. The difference is that with contrasts you
aren't restricted to a polynomial transformation, but we show it here for
simplicity. To obtain a polynomial breakdown of the degrees of freedom,
use the split option together with lists:
> summary(fit.ord, split=list(educ.ord=list(L=1, Q=2)))
             Df Sum of Sq  Mean Sq  F Value     Pr(F)
   educ.ord   2  1350.891  675.446 5.100272 0.0101831
educ.ord: L   1  1173.055 1173.055 8.857705 0.0047286
educ.ord: Q   1   177.836  177.836 1.342838 0.2527848
  Residuals  44  5827.064  132.433
The linear and quadratic effects of education on fertility are displayed and
tested, showing that it is actually the linear component of the relationship
between education and fertility that is statistically significant.
The main objective with many ANOVA models is merely to test for
certain effects. Useful additional information includes the relative effect of
each level of the term of interest. These are obtained using the dummy.coef
function:
> dummy.coef(fit.ord)
$"(Intercept)":
(Intercept)
   70.70725

$educ.ord:
 4.932745 2.662745 -7.595490
The coefficients for educ.ord show the relative effect on fertility of the
education levels of low, average, and high, respectively. Note that fertility
drops as education increases.
The choice between the lm and aov functions is often a matter of preference. If you are used to using one of the two, you naturally continue to
do so.
8.4.3
Logistic Regression
So far, we have examined models for fitting data with a continuous outcome.
Often though, the outcome is binary in nature, representing success/failure,
0/1, or presence/absence. In such cases, a logistic regression model is used
to fit the data. Such models are fit in S-Plus using the glm (or gam)
function. This function is a very general procedure, as it can be used to fit
models in the exponential family, including the normal, binomial, Poisson,
and gamma distributions.
To explore this type of regression, we will use the built-in data frame,
kyphosis, which contains a suitable variable with a binary outcome. The
data frame contains the four variables described in Table 8.4.
Table 8.4. Kyphosis data frame.
Variable    Description
Kyphosis    presence or absence of kyphosis after the operation
Age         age of the child (in months)
Number      number of vertebrae involved in the operation
Start       first (topmost) vertebra operated on
We want to see what relationship, if any, exists between the age, number, and start, and the subsequent presence of kyphosis. For example, are
younger children more susceptible to kyphosis or more resilient?
We use the glm function to fit our logistic model. All we need to specify
is the model equation, the exponential family type, and the data frame as
follows:
> fit.kyph <- glm(Kyphosis ~ Age+Number+Start,
family=binomial, data=kyphosis)
The results from the logistic model are viewed in the same manner as
earlier using the summary function, although the printed output is slightly
different.
> summary(fit.kyph)
Call: glm(formula = Kyphosis ~ Age + Number + Start,
family = binomial, data = kyphosis)
Deviance Residuals:
      Min         1Q     Median         3Q     Max
-2.312363 -0.5484308 -0.3631876 -0.1658653 2.16133

Coefficients:
                  Value Std. Error   t value
(Intercept) -2.03693225 1.44918287 -1.405573
        Age  0.01093048 0.00644419  1.696175
     Number  0.41060098 0.22478659  1.826626
      Start -0.20651000 0.06768504 -3.051043
(Dispersion Parameter for Binomial family taken to
be 1)
Null Deviance: 83.23447 on 80 degrees of freedom
Residual Deviance: 61.37993 on 77 degrees of freedom
Number of Fisher Scoring Iterations: 5
Correlation of Coefficients:
       (Intercept)        Age     Number
   Age  -0.4633715
Number  -0.8480574  0.2321004
 Start  -0.3784028 -0.2849547  0.1107516
The function call is repeated at the top, followed by some summary information on the residuals, the model coefficients, general fitting information,
and the correlations between the model coefficients. The t-statistics given
are partial t-tests, meaning that all other factors in the model have been
adjusted for. Notice that p-values are not provided with the t-statistics,
but this is an easy calculation to do in S-Plus. You learned how to do it
in the previous chapter.
The t-statistics indicate that age and the number of vertebrae do not
seem to be related to kyphosis but that the starting vertebra of the surgery
is related to it. We know that the degrees of freedom to be used for the t-tests are the same as for the residual deviance (77). In this case, the number
of degrees of freedom is large, so the cutoff for the t-distribution is similar
to that of the normal (i.e., 1.96 for the usual significance level of 0.05).
Only the t-statistic for the starting vertebra of the surgery has a value that
is more extreme than 1.96, implying that it is associated with kyphosis.
To be a little more precise and a lot more helpful, we have calculated both
the cutoff level of this distribution and the p-value.
> qt(.975, 77)
1.991254
> qt(.025, 77)
-1.991254
> 2*pt(-3.05, 77)
0.003137395
> 2*(1-pt(abs(-3.05), 77))
0.003137395
Using an overall level of significance of 0.05 (each tail gets 0.025), we have
calculated both the upper and lower cutoffs for the t-distribution with 77
degrees of freedom and the p-value using two slightly different methods.
Note that this type of test is a form of Wald test and is asymptotically
t-distributed (Venables and Ripley, 2002).
The modeling techniques covered in the previous subsections can also
be applied to logistic models. You may wish to try some of them now, or
simply wait until the exercises.
8.4.4
Table 8.5. Leukemia data frame.
Variable    Description
time        survival or censoring time (in weeks)
status      censoring indicator
group       treatment group (Maintained or Nonmaintained)
standard deviations and confidence intervals for each of the two treatment
groups. The estimated survival for the nonmaintained group at 30 weeks is
0.29, whereas it is 0.49 at 31 weeks for the maintained group.
Printing the fit of the survival analysis directly yields certain summary
statistics for each of the two groups, such as the estimated mean and median
survival times.
> leuk.fit
Call: survfit(formula = Surv(time, status) ~ group,
        data = leukemia)

                     n events mean se(mean) median 0.95 LCL 0.95 UCL
group=Maintained    11      7 52.6    19.83     31       18       NA
group=Nonmaintained 12     11 22.7     4.18     23        8       NA
With this output, we see that the median survival time for the maintained
group is 31 weeks but is only 23 weeks for the nonmaintained group. The
next obvious question is whether or not the difference in survival between
these two groups is statistically significant. Using the survdiff function,
we can conduct a statistical test for the difference between two survival
distributions:
> survdiff(Surv(time, status) ~ group, leukemia)
        Group  N Observed Expected (O-E)^2/E (O-E)^2/V
   Maintained 11        7    10.69      1.27       3.4
Nonmaintained 12       11     7.31      1.86       3.4

Chisq= 3.4
8.4.5
Endnote
The aim of this book is not to teach statistics to the reader, although much
is explained in the text. The focus is primarily to show how to understand
and use S-Plus, independent of one's knowledge of statistics or computing.
If your interest in using S-Plus to perform statistics has been sparked and
you want to learn more, we refer you to other sources. In particular, you
may want to look at Venables and Ripley (2002), Pinheiro and Bates (2000),
Therneau and Grambsch (2000), or Harrell (2001).
8.5 Exercises
Exercise 8.1
In the regression model presented in this chapter, we assumed that
education was a continuous variable. Often, however, we encounter variables that have discrete levels and must perform a regression model with
them. Create four education groupings of roughly equal size and an indicator variable for each grouping. The first indicator variable should be
equal to 1 for each observation in the first grouping and 0 for all other
observations. The other indicator variables are similarly defined. Perform
a multiple regression with these new indicator variables in place of the
continuous version of education. What is the mean education level per
grouping? Does the mean fertility differ between the groupings? What additional conclusions can you make from this analysis over the one presented
in the chapter?
(Hint: Use the quantile, cut, and cbind functions.)
Exercise 8.2
This exercise is designed to give you some practice with logistic regression,
model building, and stepwise regression. Using the Kyphosis data set as in
the section on logistic regression in this chapter, perform the following:
Fit a model to the kyphosis data containing only the variable Start
as a predictor.
Use update to add a term for Age.
Use anova to compare the fits of the two models. Which is better?
If the model with both terms is better, use update to add the term
Number; if not, use update to add Number directly to the model with
Start. Compare these models. What do you conclude?
You have now constructed a model using one approach. Compare this
model with that obtained using stepwise regression. Are the two models the
same? Which model do you think is the best? (Hint: Fit a model with
all terms because the stepwise regression procedure (step) removes terms.)
Plot the fit of the best-fitting model. What can be said from the diagnostic
plots? Does the model fit well? Try using the plot.gam function, and then
answer the same question.
Exercise 8.3
In this exercise, we will lead you through the steps necessary to analyze
the drug.mult data set. You will first need to familiarize yourself with the
data set by reading its description in the help system. You will note that
it has two factors: gender and time. The time factor is hidden in the
layout of the data in that there is a response vector at each time point. An
ANOVA model is the typical procedure to use for a continuous outcome
variable and factor variables as the predictor variables. We have listed the
tasks you will need to perform:
Rearrange the data into a new data frame such that there are variables for the response, sex, time, and subject. (Define sex and subject
as class factor and time as class ordered.)
Perform an ANOVA model with response as the outcome and sex as
the predictor. What do the results tell you about the effect of sex?
Perform an ANOVA model with response as the outcome and time as
the predictor. What do the results tell you about the effect of time?
Perform an ANOVA model with response as the outcome and both
sex and time as predictors. What do the results tell you about the
effects of sex and time when both are in the model (simultaneous
adjustment)?
Perform an ANOVA model with response as the outcome and predictors of sex, time, and the interaction of sex and time. What do the
results tell you about the interaction of sex and time?
The analysis to this point is fine as a first step, but it is not technically
correct because it has assumed that the response is independent from one
time point to the next. As the same subjects are used at each of the time
points, their responses are surely correlated. Continue the analysis with
some more advanced techniques.
Confirm that the observations are correlated across the time points.
(Hint: The cor and var functions work on matrices.)
Perform a split-plot ANOVA. This type of model uses the aov
function in conjunction with the Error function. What do the results
tell you about the effects of sex and time?
Perform a multivariate analysis of variance (MANOVA) on the data.
What do the results tell you about the effect of sex? (Hint: Use the
manova function on the original data set.) Summarize the results using
the univariate=T option. Does this shed any further light on the
results, or does it confuse things further?
What do all the results actually tell us about the effects of sex and time?
Do all the results agree? What have we forgotten to do in this analysis?
We have here a lot of results from fancy statistical techniques and, worst
of all, they do not all agree with one another. What we have forgotten is
the simple data summaries and graphics to show us what information is
really in the data. Most statistical models rely on assumptions that may or
may not be met. Simple summary statistics and graphical descriptions of
the data cannot be overemphasized. Furthermore, they should be the first
step of a data analysis, not something done at the end when the results
from models are confusing. Nonetheless, the analysis for this exercise will
conclude with a few graphical methods that are of tremendous help in
trying to interpret effects in ANOVA models.
Perform summary statistics on the variables (not shown in solution).
Explore the main effects of the sex, time, and subject factors by using
the graphical function plot.design.
Explore the interaction of sex and time by using the graphical
function interaction.plot.
Can you now make more informed conclusions about the effects of time
and sex? Can you see why some people claim that statistics is an art rather
than a science?
For statisticians reading the book, the analysis is still not complete (although the solution is). The MANOVA test is probably too conservative to
use for the final conclusion. The split-plot results rely on the assumption
that the variance–covariance matrix is spherical. Looking at the variance–
covariance matrix (and/or correlation matrix), it seems likely that it is
not spherical. You could make a Huynh–Feldt adjustment to the degrees of
freedom in the split-plot test and recompute the p-value, but this is only an
approximation to departures from nonsphericity. The best option is to perform a longitudinal analysis of the data. With this modeling approach, it is
possible to choose a model that is between the generality of the MANOVA
model and the simplicity of the simple ANOVA (or split-plot) model. Longitudinal models can be fit with the lme function, lme standing for linear
mixed effects. The statistics involved with longitudinal techniques are far
beyond the scope of this book, but those with the desire may read the documentation (or Pinheiro and Bates, 2000) for the lme function and carry
this analysis one step further.
8.6 Solutions
Solution to Exercise 8.1
To make our lives easier, we first extract the vectors of interest so that we
don't have to deal with matrices.
> education <- swiss.x[, "Education"]
> fertility <- swiss.fertility
Next, we extract the quantiles we need to define our groupings. The groups
will be defined as (1) minimum–25th quantile, (2) 25th–50th quantile, (3)
50th–75th quantile, and (4) 75th quantile–maximum. Since the quantiles
are based on the data (empirical), they might not have exactly the same
number in each group, but it should be very close. The actual quantiles are
shown below and will act as the cutoff points for the groupings.
> quants.educ <- quantile(education,
c(0, .25, .50, .75, 1))
> quants.educ
 0% 25% 50% 75% 100%
  1   6   8  12   53
We will use the cut function to use the cutpoints defined by the quantiles
to create a variable that contains a 1 for the first group, a 2 for the second
group, and so on. Note that we have also used the include.lowest option
to include the minimum value. Groups are normally defined starting with
any value greater than the lower cutpoint and up to and including the
upper cutpoint. This method's flaw is that the minimum value gets left
out.
> cuts.educ <- cut(education, quants.educ,
include.lowest=T)
The tapply function must be used here (instead of apply) because
cuts.educ has been put into a list. We see that the first group has three
more values in it than the other groups but that the groups are roughly
balanced. We also see that the mean education level increases from 3.79 in
the first group to 24 in the fourth group.
> tapply(education, cuts.educ, length)
 1+ thru 6 6+ thru 8 8+ thru 12 12+ thru 53
        14        11         11          11
> tapply(education, cuts.educ, mean)
 1+ thru 6 6+ thru 8 8+ thru 12 12+ thru 53
  3.785714  7.363636   10.72727          24
To make our indicator variables, we first make four vectors and fill them
with zeros. We then put ones into the vectors in the appropriate elements.
> educ1 <- educ2 <- educ3 <- educ4 <- rep(0, length(education))
> educ1[cuts.educ==1] <- 1
> educ2[cuts.educ==2] <- 1
> educ3[cuts.educ==3] <- 1
> educ4[cuts.educ==4] <- 1
We use the cbind function to put our indicator variables into a matrix to
be used in the regression.
> newx <- cbind(educ1, educ2, educ3, educ4)
> summary(lm(fertility ~ newx))
Error in lm.fit.qr(x, y): computed fit is singular,
rank 4
Dumped
We have run into a problem with our regression model. Fortunately, this
is a typical problem that is easily fixed. You have to realize that, unless
specified, linear regression includes an intercept term. The problem is that,
numerically speaking, the columns of our design matrix are not linearly
independent, which results in a singular matrix (intercept = educ1 +
educ2 + educ3 + educ4).
As mentioned, we can solve this problem of overparameterization by
simply removing one of the indicator variables. The choice of which one
to remove won't change the results, only the interpretation.
> newx <- cbind(educ1, educ2, educ3)
> summary(lm(fertility ~ newx))
Call: lm(formula = fertility ~ newx)
Residuals:
    Min     1Q Median    3Q   Max
 -25.64 -6.186 -2.236 6.382 22.26
Coefficients:
              Value Std. Error t value Pr(>|t|)
(Intercept) 60.6364     3.3783 17.9486   0.0000
  newxeduc1 13.6994     4.5145  3.0345   0.0041
  newxeduc2 16.0818     4.7777  3.3660   0.0016
  newxeduc3  7.1000     4.7777  1.4861   0.1446
Correlation of Coefficients:
          (Intercept) newxeduc1 newxeduc2
newxeduc1     -0.7483
newxeduc2     -0.7071    0.5292
newxeduc3     -0.7071    0.5292    0.5000
There is no indicator for education level 4, but we can estimate the fertility
rate for that group based on the model above. The estimated coefficients
above are for the intercept and the remaining three education levels. Since
we left education level 4 out of the model, we have set its effect, not its
value, to zero. To get the estimated fertility levels, take the coefficient
from the intercept and add the education level coefficients for estimates
of 60.6+13.7=74.3, 60.6+16.1=76.7, 60.6+7.1=67.7, and 60.6+0.0=60.6,
respectively. The overall test of whether or not education is related to
fertility rate is given by the F-test with 3 and 43 degrees of freedom. Note
that the p-value for this test is well below the standard cutoff of 0.05,
indicating that there is a significant relationship between the two measures.
By construction, the mean education level increases in the four groupings,
but the fertility decreases. Notice that there is little change in mean fertility
level between the first two groups. This is reflected in the fact that the
coefficients are nearly the same for the variables newxeduc1 and newxeduc2.
This indicates that the true relationship between education and fertility is
not exactly linear but is well-approximated by a linear model.
Correlation of Coefficients:
      (Intercept)      Start
Start  -0.5614099
  Age  -0.5165854 -0.2967662
We have now fit a model with terms for both starting vertebra and age
and can see that the former is associated with the outcome of kyphosis
whereas the latter isn't (t-value is less than 2). Sometimes, a term can
appear to have added little to a model, but because of interdependence of
the predictor terms, it is safer to perform a more structured test using the
anova function. Note that we are not going to fit an ANOVA model (using
the aov function) but merely compare the fit from two models.
> anova(fit.b, test="Chi")
Analysis of Deviance Table

Binomial model

Response: Kyphosis

Terms added sequentially (first to last)
      Df Deviance Resid. Df Resid. Dev    Pr(Chi)
 NULL                    80   83.23447
Start  1 15.16229        79   68.07218 0.00009865
  Age  1  2.77311        78   65.29907 0.09585957
The output from the anova function shows us that Start is highly significant (it significantly reduced the residual deviance from a null model) but
that the addition of Age is not. The addition of Age has resulted in a decrease in residual deviance of only 2.8 for a loss of 1 degree of freedom.
Hence, at the typical cutoff of 0.05, Age does not significantly add to the
overall model. We choose not to keep Age in the final model and next
consider Number in the same fashion.
> fit.c <- update(fit.a, . ~ . + Number)
> summary(fit.c)
Call: glm(formula = Kyphosis ~ Start + Number,
family = binomial, data = kyphosis)
Deviance Residuals:
      Min         1Q     Median         3Q      Max
-1.903483 -0.5938547 -0.3885682 -0.2490264 2.214024

Coefficients:
                 Value Std. Error    t value
(Intercept) -1.0288475 1.22345046 -0.8409392
      Start -0.1849421 0.06397197 -2.8909871
     Number  0.3574336 0.19963697  1.7904177
        Df Sum of Sq      RSS       Cp
<none>               70.26871 78.26871
   Age   1  2.877010 73.14572 79.14572
Number   1  3.336562 73.60527 79.60527
 Start   1  9.308866 79.57757 85.57757
Call:
glm(formula = Kyphosis ~ Age + Number + Start, family
      = binomial, data = kyphosis)

Coefficients:
(Intercept)        Age   Number    Start
  -2.036932 0.01093048 0.410601 -0.20651

Degrees of Freedom: 81 Total; 77 Residual
Residual Deviance: 61.37993
The stepwise deletion procedure hasn't removed any of the terms. However,
we just saw that a stepwise addition procedure (done manually) doesn't
arrive at the full model. This occurs frequently because of the criteria set
for the automatic stepwise procedure. If they had been fine-tuned, we might
have reached the same final model.
In any case, we feel more confident in the model that we built manually
(fit.c). We should now look at some diagnostic plots to see if the overall
model fits the data well. A plot of fit.c is not very helpful in answering
our question (see Figure 8.2).
Because the response variable kyphosis is dichotomous and takes only
the values 0 and 1, the standard diagnostic plots that work so well with
Gaussian distributed data don't convey much information. In this setting,
it is much more useful to use one of the special plotting functions, plot.glm
and plot.gam, which have been specially designed to plot more appropriate diagnostic graphs for these types of models. We have chosen to use
plot.gam using the following code.
> par(mfrow=c(1, 2))
> plot.gam(fit.c, resid=T, scale=6)
The option resid specifies that partial residuals are to be plotted, and the
scale option specifies the number of units that are to be plotted on the
y-axis (important when trying to keep the same axis for all graphs).

[Figure 8.2: the standard diagnostic plots of fit.c — deviance residuals and sqrt(abs(deviance residuals)) against the predicted and fitted values (Fitted: Start + Number), the response against the fitted values, Pearson residuals, and a normal QQ-plot of the residuals.]

The resulting graph is shown in Figure 8.3. The partial residuals show that the
linear relationship used with both predictor variables included in the model
is approximately correct.
Another question that could be asked at this point is how well the model
could be used to predict outcomes. With this model, would we have predicted any false positives? False negatives? These questions, and lots of
related ones, are left to the reader to explore.
[Figure 8.3: plot.gam partial residual plots of fit.c for Start and Number.]
Our ANOVA model with time is run in the same fashion as the model
for sex.
> nd.aov.time <- aov(response ~ time, data=newdrug)
> summary(nd.aov.time)
          Df Sum of Sq  Mean Sq  F Value      Pr(F)
     time  3   49.1083 16.36944 2.744479 0.07002713
Residuals 20  119.2900  5.96450
We see from this model that the variable time is not significantly related
to the response at the 0.05 level (the default level of the test).
The two models above have looked at the two predictor variables in
isolation. If they are independent of one another, a joint model will come to
the same conclusions. In practice, however, a joint model will yield slightly
(or sometimes substantially) dierent results than the separate models. We
show the results of the joint model as follows:
> nd.aov.both <- aov(response ~ sex+time, data=newdrug)
> summary(nd.aov.both)
          Df Sum of Sq  Mean Sq  F Value       Pr(F)
      sex  1  60.80167 60.80167 19.75149 0.000278263
     time  3  49.10833 16.36944  5.31763 0.007861779
Residuals 19  58.48833  3.07833
Notice that the variable time is highly significant in the joint model. This
tells us that the mean response differs between males and females, and also
between the time points.
The next question to consider is the following. Is there a significant interaction between sex and time? One example of an interaction would be
to have a large mean for males and a small mean for females at the early
time points but the reverse at the late time points. In the model below, we
add the interaction term.
> nd.aov.inter <- aov(response ~ sex+time+sex:time,
data=newdrug)
> summary(nd.aov.inter)
          Df Sum of Sq  Mean Sq  F Value     Pr(F)
      sex  1  60.80167 60.80167 22.26827 0.0002317
     time  3  49.10833 16.36944  5.99522 0.0061389
 sex:time  3  14.80167  4.93389  1.80701 0.1864698
Residuals 16  43.68667  2.73042
We are now reasonably satisfied that the interaction between time and sex
is not significant and stick to the conclusion that there are main effects of
both variables.
There is an obvious problem with the analyses we have performed up to
this point that all boils down to correlation. Take any particular subject.
Is his/her response at time 1 independent from his/her response at time 2?
Although it is theoretically possible that these responses are independent, it
8.6. Solutions
281
is probable that they are correlated. The ANOVA model, however, assumes
that all responses are independent from one another.
The design of the study is somewhat typical and can be run in the
ANOVA setting under certain other conditions. If we introduce a random
effect for the subjects, we can make an adjustment for the correlation that
exists. The type of model we run in this case is referred to as a split-plot
ANOVA model and is fit in S-Plus with the simple addition of one extra
term:
> nd.split.plot <- aov(response ~ sex*time+Error(subj),
data=newdrug)
> summary(nd.split.plot)
Error: subj
          Df Sum of Sq  Mean Sq  F Value   Pr(F)
      sex  1  60.80167 60.80167 19.32256 0.01173
Residuals  4  12.58667  3.14667

Error: Within
          Df Sum of Sq  Mean Sq  F Value     Pr(F)
     time  3  49.10833 16.36944 6.316184 0.0081378
 sex:time  3  14.80167  4.93389 1.903751 0.1828514
Residuals 12  31.10000  2.59167
The output is slightly different in that it is split into two parts. In the
Error: subj section, the Residuals term refers to the subjects nested
within sex. This term has 4 degrees of freedom, n-1 for the females (= 2)
and n-1 for the males (= 2). In the Error: Within section, the Residuals
term stands for the interaction between sex, time, and subject nested within
sex. These are perhaps statistical details that can be ignored.
The main point is that we have new tests for sex, time, and their interaction. Notice that the interaction is still nonsignificant and that sex and
time are both still significant. Notice that the p-value for time has hardly
changed at all, whereas the p-value for sex, although still significant, has
increased due to the correction for correlation.
The split-plot ANOVA model assumes that the variance–covariance
matrix across the time points is spherical. In practice, the definition
of sphericity is met with the compound symmetry variance structure
that assumes a common variance along the diagonal and a different but
constant covariance in the off-diagonals. This seems to be a fairly restrictive assumption but one that fits the data quite well in many practical
situations.
Without going into a test to see whether or not the sphericity assumption of the split-plot model holds, it is also possible to run a multivariate
analysis of variance (MANOVA) on the data. This type of model is designed for responses that are correlated, be it different response variables
(pulse and blood pressure) or, as in our case, responses at different time
points. One drawback is that we will lose the ability to test for an effect of
time. The MANOVA assumes a general covariance structure such that the
assumptions are always met (we still must have Gaussian distributed data).
This generality comes with a price called reduced power. Since it can test
against any alternative hypothesis, it does not necessarily have power to
test the particular hypothesis we have. This means that the MANOVA test
might say that the effect of gender is not significant when it really is. If,
however, this test says that gender is significantly related to the response,
then we can be sure of our result.
> nd.manova <- manova(cbind(Y.1, Y.2, Y.3, Y.4) ~ gender,
      data=drug.mult)
> summary(nd.manova)
             Df Pillai Trace approx. F num df den df  P-value
   gender     1     0.928614   3.25209      4      1 0.391236
Residuals     4
> summary(nd.manova, univariate=T)
Response: Y.1
          Df Sum of Sq  Mean Sq F Value    Pr(F)
   gender  1  7.260000 7.260000 2.93531 0.161819
Residuals  4  9.893333 2.473333

Response: Y.2
          Df Sum of Sq  Mean Sq  F Value     Pr(F)
   gender  1  13.20167 13.20167 3.345017 0.1414016
Residuals  4  15.78667  3.94667

Response: Y.3
          Df Sum of Sq  Mean Sq  F Value       Pr(F)
   gender  1  4.681667 4.681667 35.55696 0.003971604
Residuals  4  0.526667 0.131667

Response: Y.4
          Df Sum of Sq Mean Sq  F Value      Pr(F)
   gender  1     50.46   50.46 11.54691 0.02732576
Residuals  4     17.48    4.37
The p-value for gender shows that it is not significantly related to the
response (p=0.39). The univariate ANOVAs show that the gender effect is
significant at the last two time points. What do we do now? Which of the
analyses do we believe?
8.6. Solutions
283

This is how many statisticians proceed with data analysis. The point is
that much of the information contained in the data can be seen through
the use of summary statistics and simple graphs. Furthermore, these are
precisely the strengths of S-Plus and should have been the starting point
of our analysis. So, we will now start at the beginning.
You have been asked to perform the summary statistics on your own.
There are two types of graphs that are of particular interest for ANOVA-type
models, namely plot.design and interaction.plot. We put the
syntax below and show the resulting graphs in Figure 8.4.
[Figure 8.4. Top: plot.design of the drug data, showing the mean response for the factors sex, subj, and time. Bottom: interaction.plot of the mean response over time, with separate lines for males (M) and females (F).]
9
Programming
Now that you have learned the elementary commands in S-Plus and many
ways of applying them, it is time to discover its advanced programming
functionalities. This chapter introduces list structures, loops, and writing
functions, covers debugging matters, and gives some more details on how
to use S-Plus as a programming language.
9.1 Lists
A list is typically a very flexible structure in which to store data. Its main
usage is in situations where some data belong together by their contents
but the structure is very heterogeneous. Imagine a database of somebody's
floppy disks, each disk with a number, a short text description of the contents, and a list of files it contains. It would not be a good idea to store
this kind of data in a matrix or data frame because the number of files is
different for each disk. A single list can contain different element types; for
example, a vector, a matrix, and another list, all at the same time.
A list is constructed by calling the function list and enumerating
the objects to go into it. Here is an example of creating a list with three
elements, each of them being a vector.
> L <- list(1:10, c(T, F), c("Hey", "You"))
This list has three elements: the first element is a vector containing the
numbers 1 to 10, the second element contains the Boolean values TRUE
and FALSE, and the third element contains two strings. Of course, you can
create a list in many different ways. Here is how we could have created the
same list using variables:
> x <- 1:10
> y <- c(T, F)
> z <- c("Hey", "You")
> L <- list(x, y, z)
9.1.1
To add a new element to a list, simply put it into a location that is not
already used. The list L, from above, consists of three elements that can be
determined in the usual way by
> length(L)
3
You can add a new element by assigning the new element to an unused
index:
> L[[4]] <- c("my", "new", "element")
You can use any index you like, but using the next free index is preferable,
as the elements in between are assigned the value NULL. If you add element
number 99 to a list that has 3 elements, you automatically create elements
numbered 4 to 98.
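For example (M here is just an illustrative list, not one used elsewhere):

> M <- list(1, 2, 3)
> M[[99]] <- "far away"
> length(M)
99
> M[[50]]
NULL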
A flexible command to use would be
> L[[length(L)+1]] <- next element
where next element can be any type of data. Note that the new element
of L is added directly after the last element without explicitly knowing the
length of the list.
If an already existing index is specified, the element is of course
overwritten. You delete an element by setting its value to NULL. The
command
> L[[2]] <- NULL
deletes the second entry of the list so that it no longer appears, not even
with its newly assigned value, NULL.
Note After deleting an element, all remaining elements of the list are
shifted: their indices are decreased by 1!
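A small sketch of this behavior (M is again an illustrative list):

> M <- list("a", "b", "c")
> M[[2]] <- NULL
> length(M)
2
> M[[2]]
"c"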
Think about what happens on the following input lines and then try it out.
>
>
>
>
9.1.2
Now, the elements of the list can be addressed using their names. This is
especially useful if you do not want to have to remember the index numbers.
We retrieve the element bool from the list L by entering
> L$bool
T F
If you make an assignment to an element whose name is not already an
element name, another element with the new name is added to the list.
The command
> L$comment <- "we assign a new element"
adds a new element, comment, to the list.
For modifying existing names, the old names can simply be overwritten
using the names function:
> names(L)                        # the names of the elements
"number" "bool" "message" "comment"
> names(L)[2] <- "true and false" # new name for element No. 2
> names(L)                        # the new names
"number" "true and false" "message" "comment"
To access the list element with the name true and false, put the name in
double quotes (because of the spaces in its name).
> L$"true and false"
Note Using named list elements is a good idea. In this way, the order and
position of the list elements are completely irrelevant and can always be
changed.
Note Named list elements can be addressed in several ways. The following
commands all do the same thing: extracting the element number from
the list L.
> L$number
> L$"number"
> L[["number"]]
Single brackets can be used as well, but the returned result differs slightly.
The expressions above return a vector, whereas the expressions below
return a list of length 1.
> L["number"]
> L[names(L)=="number"]
Extracting a single list element is therefore typically done by using double
brackets or the $ operator. Single brackets are useful if the operation might
return more than one list element.
9.1.3
It is often very convenient to keep objects of the same type in a list, for
example, the country data we used earlier. S-Plus provides some useful
functions that enable the application of a function to all elements of the
list, making the task easy and efficient.
lapply
The tool lapply is designed for working on all elements of a list using the
same function. We use the country data again and calculate some statistics
of the list elements:
> countries <- list(GDP=c(208, 1432, 2112, 259),
Pop=c(8, 61, 82, 7),
Area=c(84, 544, 358, 41))
> countries
$GDP:
208 1432 2112 259
$Pop:
8 61 82 7
$Area:
84 544 358 41
Let us now figure out what the combined statistics of those countries are.
> lapply(countries, sum)
$GDP:
4011
$Pop:
158
$Area:
1027
Remember that all standard arithmetic is also a function within S-Plus,
so we can also subtract 1 from each element of the list by entering
> lapply(countries, "-", 1)
You see that all additional parameters passed to lapply are passed unmodified to the function "-". The minus operator "-" is a function like
any other. You could enter 5-2 or, alternatively, "-"(5, 2). In order to see
its declaration, use get("-").
The index selection functions [ and [[ can be used in the same
way. We extract the first element of each list component by supplying the
parameter 1 to the function [.
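Such a call might look as follows (a sketch; the exact output layout may differ slightly):

> lapply(countries, "[", 1)
$GDP:
208
$Pop:
8
$Area:
84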
sapply
If we wanted to obtain a vector of sums instead of a list as above, we could
enter
> unlist(lapply(countries, sum))
GDP Pop Area
4011 158 1027
but we could have used sapply instead. The function sapply simplifies the
structure of the results if possible. If the result is a list with elements of the
same length, they are composed into a simpler structure. Using sapply to
derive the sums of the list elements gives
> sapply(countries, sum)
GDP Pop Area
4011 158 1027
whereas using sapply to calculate summary statistics of all list elements
gives
> sapply(countries, summary)
            GDP   Pop   Area
Min.      208.0  7.00  41.00
1st Qu.   246.2  7.75  73.25
Median    845.5 34.50 221.00
Mean     1003.0 39.50 256.70
3rd Qu.  1602.0 66.25 404.50
Max.     2112.0 82.00 544.00
rapply
If our data structure is a more complicated list, there is another function
that might help: rapply descends down through all levels of a list if we
have a list of lists, and it lets us define a function to apply only to objects
of a certain class.
One needs to specify the object to process and the function to apply.
Optionally, one can set a restriction to use only objects of certain classes
by specifying classes="vector of class names" and how to treat the result
(the default is unlisting the list).
Here is a simple example. Let us define a list of four components, the
fourth being a list again.
> x <- list(one=1:5, two=letters[1:4], three=111:123,
four=list(A=1:3, B=LETTERS[5:7]))
We can reverse the order of all elements in the list using rapply.
> rapply(x, rev, how="replace")
$four:
$four$B:
"G" "F" "E"
$four$A:
3 2 1
$three:
123 122 121 120 119 118 117 116 115 114 113 112 111
$two:
"d" "c" "b" "a"
$one:
5 4 3 2 1
You can see that the elements were reversed on all levels. The list element
$four is now in position 1, and all its components are reversed as well.
Let us calculate the mean of all numeric elements in the list:
> rapply(x, mean, "numeric", how="replace")
Problem in rapply(x, mean, "numeric", how="replace..:
must supply a function of one argument
Use traceback() to see the call stack
S-Plus informs us that the function to apply must have exactly one argument. The function mean has one argument only (plus an option), such
that we have to use a little trick:
> f <- function(x) return(mean(x))
> rapply(x, f, "numeric", how="replace")
$one:
3
$two:
"a" "b" "c" "d"
$three:
117
$four:
$four$A:
2
$four$B:
"E" "F" "G"
9.1.4
Unlisting a List
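As a minimal sketch of unlisting, the countries list from above can be flattened into a single vector:

> unlist(countries)    # one vector of 12 values, named after the list elements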
9.1.5
The split function splits up one variable according to another. The result
is a list with as many elements as there are distinct values in the second
variable. For example,
> split(barley$yield, barley$site)
generates a list of length 6, as there are six sites in the Barley data. Each
list component contains the yield for one of the sites. The names of the list
elements correspond to the levels of the site variable.
The split data are now ready to apply a function to the yield data of
each site separately.
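For example, the mean yield per site could be computed in one line (a sketch; output omitted):

> sapply(split(barley$yield, barley$site), mean)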
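A cube function of the kind discussed below can be declared in one line (a sketch; the exact declaration in the text may differ):

> cube <- function(x) { return(x^3) }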
This function now inherits the full flexibility of all other S-Plus functions just because it is handled by the S-Plus interpreter. It can not only
calculate a single cubed value as in
> cube(3)
27
but you can also supply a vector,
> x <- 1:5
> cube(x/2)
0.125 1.000 3.375 8.000 15.625
call it recursively,
> cube(cube(x))
1 512 19683 262144 1953125
or pass a structure such as a matrix.
> x <- matrix(1:4, 2, 2)
> x
1 3
2 4
> cube(x)
1 27
8 64
A function can have more than one argument (or none at all). We will
imitate the system's division function. Yes, the operand "/" in S-Plus is
itself a function, which can be accessed and modified exactly like any other
function. Enter get("/") to see the declaration.
The following function simply divides the first argument supplied by the
second:
> divide <- function(x,y) { return(x/y) }
Here are some examples of calls to the new function divide:
> divide(7, 4)                    # two numbers as arguments
1.75
> a <- 5:10
> b <- 20:25
> divide(b, a)                    # two vectors as arguments
4 3.5 3.14286 2.875 2.6666 2.5
> divide(1:10, 3)                 # a vector and a number
0.3333 0.6666 1.0000 1.3333 1.6666 2.0000 2.3333
2.6666 3.0000 3.3333
The more general syntax of a function declaration is as follows:
The expressions in italics have to be replaced by valid names and expressions. The arguments are just a listing of parameters to be used inside the
function. The expressions can be any valid S-Plus expressions, which are
evaluated as S-Plus executes them, and the output arguments can be any
data or variable known to the function.
Note that variables you want to use in the function do not need to be
declared in advance as in most programming languages. S-Plus does it for
you when it comes across an assignment.
It is important to know that S-Plus searches for a referenced variable,
and if it does not find it in the function's scope, it searches for it in the
interactive environment. You can access a variable defined interactively
from inside a function, but, for consistency, it is better to pass the variable
as a parameter to the function.
Note Instead of using return at the end, you can simply put a variable's
name as the last expression. S-Plus returns the output of the last expression in the function. Having x on its own in the last line of a function
returns the value of x.
In this fashion, a declaration such as
> f <- function(x) x^2
is perfectly valid and used by many if not most S-Plus programmers.
Using the function return assures that the return argument is always the
one wanted. It also allows the possibility of jumping out of the body of a
function at any time and returning a value.
Note If you do not want to give back anything at all or do not want it to
be printed, use the invisible command.
The expression return(invisible()) at the end of a function causes
nothing to be returned at all, nor will anything like NULL be printed on
the screen. The expression return(invisible(x)) returns the contents of
the variable x but does not show it on the screen.
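A small sketch of both variants (f here is illustrative):

> f <- function(x) { return(invisible(x*x)) }
> f(3)                 # nothing is printed
> y <- f(3)            # but the value can still be assigned
> y
9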
9.2.1 Documenting Functions

With a comment line included as help text, as in

function(x)
{
        # computes the square of x
        x^2
}
the function f has its documentation. Entering
> help(f)
or
> ?f
would give a description of our function and its arguments together with
the little help text we entered. As it is very easy to document functions in
this fashion, you should make it a habit to add a few lines of comments
to each function declaration.
9.2.2
Scope of Variables
We will now cover the scope of variables declared within functions in more
detail.
To begin, note that all variables declared inside the body of a function
are local and vanish after the function is executed. Their scope (the environment where the variables are known) does not extend beyond the
function's limits. To demonstrate this effect, we use the following example. Pay attention to the value of z before and after execution of the function f.
> z <- 123
> f <- function(x, y) { z <- x/y; return(z) }
> x <- 3
> y <- 4
> z
123
> f(x, y)
0.75
> z          # z remains unmodified
123
Note Making local variables into global ones, such that their names and
values appear in the interactive environment, is not advisable because unintended or uncontrollable side effects can occur, such as overwriting another
variable. The only proper way of handling local variables is to return them
from the function using return.
Vice versa, a function should get everything it needs via its list of parameters. In particular, it should not rely on global variables in the working
directory that are not passed as arguments.
Only in some extreme cases might you want to assign values to the
interactive environment from inside a function.
The function assign is useful for this.
> assign("x", 123, where=1)
assigns the value 123 to the variable x in the interactive environment. The
argument where=1 tells the system to assign to a variable in the first entry
of the search path, which is typically the interactive environment or the
data directory. The second argument is any S-Plus expression that can be
evaluated, such that
> y <- 123
> assign("x", y, where=1)
does the same as the statement above.
Note The function assign will overwrite already existing variables without confirmation, as its effect is like assigning something to a variable using
the <- operator.
The assign statement can be abbreviated: the operator <<- assigns the
right-hand side of the expression to a variable defined on the left-hand side.
We continue the example from above and write
> x <<- 123
to assign 123 to x in the global interactive environment from within a function. Understand that this method of assigning variables globally should
only be used in dire circumstances.
9.2.3
f(1:10, (1:10)^2, T)
f(1:10, (1:10)^2)
f(1:10)
f(y=(1:10)^2)
f(showgraph=T)
f()
Moreover, the parameters can use other parameters in the default settings.
See the next example, and note that y is a function of x in the declaration
of y and that error is a function of both x and y.
Example 9.2. A function with cross-referencing default parameters
The parameters of the function declared use other parameters to set default
values.
f <- function(x=1:10, y=x^2,
              error=(length(x) != length(y)))
{
    if(error) return("Lengths of x and y do not match.")
    else return(cbind(x, y))
}
Note S-Plus uses the so-called lazy evaluation mechanism. This means
that expressions are evaluated when needed, not when declared. In the
example above, if x is changed before y is used for the first time, y no longer
takes the squared value of the default x but that of the latest assigned value
of x.
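A small sketch of the effect (g and its default are illustrative, not the f discussed below):

> g <- function(x, y=x^2) { x <- x+1; return(y) }
> g(2)                 # y is evaluated only now, after x was changed
9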
This can cause surprising effects in rare cases. Here is an example. What
do you expect to be returned by f(2), f(2, 5), and f(2, y=3*x)? The
answers are given in the footnote on this page.1
1 f(2) will return 2, as x is multiplied by 2 before y is accessed for the first time
and the calculation y = x/2 is carried out. f(2, 5) will return 5, as y is set explicitly
and y = x/2 is overridden. f(2, y=3*x) will return an error message that x was not
found, unless you have an object x in your current session. It does not refer to the x
that is only known within the scope of the function.
9.2.4
In a function declaration, you can define the argument, ..., in any position
of the parameter list. As a consequence, the function is able to accept any
number of arguments (such as the S-Plus functions boxplot and c). The
arguments can be named, as in f(x=3), or unnamed, as in f(3).
Because anything can be passed, the ... argument has to be evaluated
manually in the body of the function. It must be checked how many
arguments were passed, and if an argument with a specific name is needed,
it must be checked if it was passed. We will look into the details now.
First, it makes a difference where the ... appears. Here are two simple
examples:
> f1 <- function(x, ...) { some commands }
> f2 <- function(..., x) { some commands }
In the first case, where x appears before the ..., f1 can be called by
> f1(3)
and x is assigned a value of 3 within the function. But if you call
> f2(3)
the value 3 is a part of ..., and x has no value at all. You need to specify
> f2(x=3)
to obtain the same result (that x gets assigned the value 3 in the function).
Sometimes, as in the boxplot function, it is useful to have the ... in
the beginning, followed by optional arguments for special purposes. On
the other hand, it is not useful to put required arguments behind the ...
parameter. The following example shows how ... arguments are treated.
Example 9.3. Usage of ... as a function argument
The following function shows how to treat an unspecified number of parameters using the ... expression. We compare the distributions of the
data passed to the function. To do this, we calculate the minimum and
maximum over all data and set the x-axis to the same boundaries for all
histograms (the hist call is a sketch of the idea):

f <- function(..., layout=c(3, 3))
{
    L <- list(...)                      # convert data into list
    total.min <- min(unlist(L), na.rm=T)
    total.max <- max(unlist(L), na.rm=T)
    par(mfrow=layout)                   # set up graphics layout
    for(i in 1:length(L))               # create the graphs
        hist(L[[i]], xlim=c(total.min, total.max))
}
9.2.5
9.2.6
the function is terminated and the error is given back to the user. A less
crude way of telling that a problem occurred is to use the function warning.
The statement
warning("x must be greater than 0")
adds the warning message to the list of warnings. If the settings (how to
react to a warning) were not changed, the list of warnings will be issued
after the function has completed. The execution will be continued.
If you really want to, you can use the function terminate to terminate
the S-Plus session immediately.
9.2.7
If you have ever written a function that creates graphs, you know it is
often desirable to use the variable's name supplied as an argument in the
graph's title. For example, calling my.funny.graph(my.favorite.data)
should produce a plot of the data with the title "a plot of my.favorite.data".
To do this, you need to store a string containing exactly what was passed
to the function, as in the following example.
Example 9.4. Retrieving call arguments inside a function
The following function, f, prints the argument exactly as it was passed.
Declaring f as
> f <- function(x) { return(deparse(substitute(x))) }
produces the following results when calling f with different arguments:
> f(2)                            # pass a number
"2"
> f(x=2)                          # pass a number assigned to x
"2"
> f(x)                            # pass a variable
"x"
> f(sin(pi))                      # pass an expression
"sin(pi)"
In this way, you could add a title to a graph containing what was passed
as x:
title(deparse(substitute(x)))
If a function has more than just one parameter, the x in the expression
deparse(substitute(x)) can, of course, be replaced by any other parameter name. Another useful tool is the function sys.call(). It returns the
complete call to the function in the form of a list. The first element is the
name of the function, the second element is the first argument, and so forth.
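A quick sketch (show.call is illustrative):

> show.call <- function(x, y) { return(sys.call()) }
> show.call(1, 2)
show.call(1, 2)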
9.3 Iteration
An iteration is, in principle, a loop or a repeatedly executed instruction
cycle, with only a few changes in each cycle. In programming languages
that are not matrix- or array-oriented, even a simple matrix multiplication
needs three nested loops (over rows, columns, and the indices).
Since S-Plus is matrix-oriented, these operations are much more efficient
and easier to formulate in mathematical terms. This means they are typically
substantially faster than loops, and the code is much easier to read and
write. The following guideline is elementary but important to keep in
mind when writing S-Plus code.
Note Whenever possible, try to avoid loops. Ninety-nine percent of the
time, an operation on matrices or lists is much more elegant as well as
much faster. Try to use vectorized statements and functions such as apply,
lapply, rapply, sapply, tapply, by, split, and others.
Here is an easy example. If we define a vector x with 1,000,000 elements
and assign the square of x to y, without using a loop, the following two
lines take a fraction of a second to execute:
> x <- 1:1000000
> y <- x^2
On the other hand, using an explicit loop statement (for) over the values
of x and y, respectively, results in the code
> x <- y <- 1:1000000    # initialization
> for(i in 1:length(x)) y[i] <- x[i]^2
and does exactly the same as the two lines above. You can see that the first
expressions are easier to read and, on top of that, the second little program
needs an order of magnitude longer to execute than the first statement.
We will treat the different forms of loops, for, while, and repeat, in
more detail in the following sections.
9.3.1
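As a minimal sketch of the for statement, the loop below runs its block once for each element of the vector 1:3, printing the squares 1, 4, and 9:

> for(i in 1:3) print(i^2)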
9.3.2
In contrast to the for loop, the while statement is used if the total number
of iteration cycles is not known before the loop starts. For example, if you
want to iterate until a certain criterion is fullled, the while loop is the
appropriate statement to use.
Example 9.7. A simple while loop
Calculate the sum over 1,2,3,..., until the sum is larger than 1000.
n <- 0                      # iteration counter
sum.so.far <- 0             # store the added values
while(sum.so.far <= 1000)
{
    n <- n+1
    sum.so.far <- sum.so.far+n
}
The summed values are stored in sum.so.far, and the variable n is increased by 1 in each loop. Enter the short program yourself and let it run.
What is the result? How many times did S-Plus walk through the loop?
9.3.3
The third alternative for looping over expressions is to use the repeat statement. The repeat function repeats a set of expressions until a stop criterion
is fulfilled. In S-Plus, the keyword break is used to tell the surrounding
repeat environment that it is time to stop.
Example 9.8. A simple repeat loop
We rewrite the code used above to demonstrate the repeat loop.
n <- 0
sum.so.far <- 0
repeat
{
n <- n+1
sum.so.far <- sum.so.far+n
if(sum.so.far>1000) break
}
Another keyword that can be used inside repeat loops is next. If S-Plus
comes across the next statement in a repeat loop, it stops executing the
loop statements at that point and reenters the loop from the top.
You might have realized that you could very easily put S-Plus into an
infinite loop by simply forgetting to put in an appropriate break statement.
Note The main difference between a while loop and a repeat loop is that
it is possible not to enter the body of a while loop at all if the condition
is not fulfilled at the beginning, whereas a repeat loop is always entered
at least once.
Experienced programmers might know that every repeat construction
can also be written as a while construct.
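For instance, the repeat loop above can be rewritten as a while loop with a condition that is always true (a sketch):

n <- 0
sum.so.far <- 0
while(T)
{
    n <- n+1
    sum.so.far <- sum.so.far+n
    if(sum.so.far>1000) break
}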
9.3.4
Vectorizing a Loop
In the introduction to loops, you saw that looping is almost always the less
attractive solution and can often be avoided. As S-Plus is vector-oriented,
it is typically possible to rewrite a loop into shorter and more readable
code.
Looping over the elements of a vector or a matrix, or over the rows or columns
of matrices, can be avoided most of the time. Functions such as apply
and tapply on matrices and lapply and sapply on lists provide good
assistance. We have seen the usage of lapply and sapply in Section 9.1.3
(p. 290) on lists. Here are some examples using apply and other functions.
Calculating the columnwise medians of a matrix x is done by entering
> apply(x, 2, median)
In addition, we can add any parameter that we want to pass to the function
median to the call to apply. The following code can be used when x contains
missing values and we don't want the median of the corresponding column
to be missing (NA):
For the special purpose of calculating minima, maxima, ranges, means, variances, and sums of columns or rows of matrices, there are specific functions
that are typically faster. Consider using the functions colMins, rowMins,
colMaxs, rowMaxs, colRanges, rowRanges, colMeans, rowMeans, colSums,
rowSums, colVars, and rowVars. They are typically much faster, as they
were written to perform just this specific task.
How can we figure out which columns of a matrix have one or more
missing values? By testing if there is any missing value for each column of
a matrix,
> apply(is.na(x), 2, any)
The function any returns TRUE if any of the logical expressions is true and
FALSE if all the logical expressions are false. Another function, all, returns
TRUE if all logical values are TRUE and FALSE otherwise.
Alternatively, we can sum up the logical values returned by is.na(x)
(TRUE decodes to 1 and FALSE to 0) and test if the sum is larger than 0,
indicating one or more NA values.
> colSums(is.na(x))>0
The columns that have only missing values are found by entering
> apply(is.na(x), 2, all)
or
> colSums(is.na(x))==nrow(x)
The latter expression assesses if the number of NA values is equal to the
number of rows of x, indicating only NA values in the particular column.
Those columns are removed from the matrix using
> x[, !apply(is.na(x), 2, all)]
Generating a vector containing the number of values less than zero for each
column of a matrix is done by
> colSums(x<0)
If there were any missing values in the matrix, the summation would give
a missing value (NA) in return. If we want to calculate how many of the
nonmissing values are less than 0, we need to add na.rm=T to the
colSums function:
> colSums(x<0, na.rm=T)
In order to determine the fraction of values less than 0, we can take averages
over the TRUE/FALSE values, excluding NA values:
> colMeans(x<0, na.rm=T)
Going back to the while and repeat loops we have used before, they can be
rewritten without a loop.
Example 9.9. Rewriting the while and repeat loops without explicit looping
As a rst guess, adding 100 elements will denitively be sucient. We use
cumsum (cumulative sum) to add up the values 1 to 100 and check where
we have reached 1000:
> x <- 1:100
> y <- cumsum(x)
> n <- x[y>1000][1]
or
> (1:100)[cumsum(1:100)>1000][1]
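Both versions give the same answer: the cumulative sum first exceeds 1000 at the 45th element, since 1+2+...+44 = 990 and 1+2+...+45 = 1035.

> n
45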
9.3.5
Large Loops
There are situations where a loop cannot be avoided. If these loops have
many cycles or if the block of commands is long, S-Plus might need a lot
of memory and maybe even stop execution of the command due to lack of
memory.
This is often the situation where one starts to write the program in C.
It can still be used from within S-Plus, as we will see later.
However, let us point out a function that might turn out to be useful.
The problem with large loops is that S-Plus does not free memory until
completion of an expression. The function For circumvents this problem by
basically splitting up a for loop into a series of top-level expressions. The
syntax is slightly different from the for function, but, most important, all
variables used within a For loop must be global variables, not local ones
defined within functions.
Furthermore, all variables defined within the For loop will be global ones
in the local directory, even after the loop completes. Here is an example of
a function f with a simple loop:
f <- function(x)
{
For(i=1:3, { x <- x+1 })
return(x)
}
Programs do what we tell them to do, but this is very often not what we want them to do. This section is
about searching for sources of errors.
We will deal with four types of errors:
Syntax errors
While the system is reading the definition of a function, it encounters
a problem and stops reading the definition. An example is to close
more brackets than we have opened. We are given an error message,
typically beginning with Syntax Error: ...
Invalid arguments
The function received arguments that do not allow it to correctly
execute the function. Calling a routine to do a calculation on numbers
and passing character variables to it causes such errors.
Execution errors
While executing the function, S-Plus encounters an error such as an
undefined operation. One such example is plotting x against y with
x and y having different lengths. S-Plus stops the execution, and we
need to track where the error source originates. Most of the time,
S-Plus tells us
Error in ...
If the execution was aborted in a statement that is not the origin of
the error, as when we try to plot only missing values, we need to find
the source with tools referenced in the following item.
Logical errors
The S-Plus system completes the operation without detecting an
error. However, the result returned to us is wrong; for example, the
mean value of a data set being smaller than the smallest value. We
need to search the whole function for a logical error that caused a
programming error.
The above list of problems and errors is, in general, in order of difficulty of finding the source of the problem. We will now tackle these errors
systematically, introducing the tools S-Plus has to handle them.
9.4.1
Syntax Errors
Syntax errors are detected most easily. We might have forgotten a closing
bracket or confused the name of a function with something else, and the
system tells us while reading in the declaration that it disagrees with what
it has just read:
Syntax error: Unbalanced parentheses,
expected "}", before ")" at this point:
Such a message is typically followed by the line of code in question, and
having another look at the code should clarify what the problem was.
9.4.2
Invalid Arguments
When writing a function, some errors can be checked for automatically. For
example, if you write a function that gets an input parameter x, and x is
not allowed to be negative, you can check for it and exit the function in a
controlled manner by issuing an error message:
f <- function(x)
{
if(any(x < 0)) stop("x should be at least 0")
...
}
The function stop prints the message inside the brackets and lets the
function exit immediately. warning is a less crude function that issues
a warning message but continues to execute the function. In the example
above, replacing stop with warning will cause the body of the function to
be executed, but a warning message will be printed if x is less than zero.
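As an illustration (assuming f, as sketched above, goes on to compute something from x), calling it with a negative argument aborts immediately; the error message format matches what S-Plus prints for stop:

> f(-2)
Problem in f(-2): x should be at least 0

If stop is replaced by warning, f(-2) still executes the body of the function and prints the warning message afterwards.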
You can also use return anywhere in a function. If S-Plus encounters
a return statement, it returns the specified value and exits the function,
no matter where you are inside the function's body.
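A small sketch of an early return (sign.of is a name made up for this example):

> sign.of <- function(x)
  {
      if(x < 0) return("negative")
      if(x == 0) return("zero")
      "positive"
  }
> sign.of(-3)
[1] "negative"

The last expression of a function body is its return value anyway, so return is mainly useful for exiting in the middle of a function.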
9.4.3
Execution Errors
The problem with run-time errors is that they occur while a function is
being executed. If the function calls other functions, it is sometimes difficult
to track down the problem. Determining the line of code where the function
was actually aborted is the topic of this section. In the good old days,
9.4.4
Logical Errors
If you are familiar with other programming languages like C, you might
know that these languages typically come with a debugger. The purpose of
a debugger is to trace the state of a program while it is being executed. To
do this, the program is not executed all at once but interrupted in certain
places or executed line by line. S-Plus has these facilities, too.
If we know where a problem occurs but are unsure what the problem is,
we might want to interrupt the program in that specic place and look at
the contents of the variables. There are several ways of doing it. The most
common and possibly easiest approach is to use the browser function.
browser
The browser function called within another function interrupts the execution and lets the user query contents of variables and inspect levels of calls.
prompt reappears. The problem might also persist in S-Plus for Windows
Version 7.0.[3]
If you want to manually repair this behavior, create a local copy of the
function browser.default. If you spot a code segment that reads
if(catch) restart(T, T)
replace it with
if(catch) restart(T, T, skip.error.handler=F)
If the code contains this line already, call browser.default() instead of
browser.[4]
If you do not experience a problem with browser, just ignore this note.
trace
You don't need to edit your function and add the command browser()
to its body. The function trace allows you to examine a function's
environment on entering and exiting. Enter
> trace(f, browser)
to call browser on entering the function f,
> trace(f, exit=browser)
to enter browser on exiting the function f, and
> trace(f, browser, exit=browser)
to enter browser on entering and exiting f. Replace f by any function name.
The function untrace ends the tracing process. Call untrace(f) to stop
tracing the function f or untrace() to stop all tracing. Before calling
untrace again, take a look at the function you are tracing.
debugger
If you want to run a debugger after your function has crashed, tell S-Plus to
dump all current frames on exiting the function. S-Plus will write out all
levels of calls and their variables on exiting. You can then run the debugger.
As dumping out all frames might take some time, depending on the
complexity of the function, the system default when an error occurs is to
dump only the levels of function calls. By default, the settings are
[3] This information was kindly provided by Insightful at the time of writing this book,
based on beta releases of S-Plus Version 7.0.
[4] Personal observation using S-Plus Version 7.0.0 beta for Linux 2.4.18. It works
despite the fact that browser only calls browser.default.
> options(error=expression(dump.calls()))
If we want to go through the levels of function calls and examine the
variables' contents, we need to change the setting to
> options(error=expression(dump.frames()))
and run the function again. The dump will then be done (if the function
crashes again), and we can inspect the state of the system by using the
debugger:
> debugger()
If your function works fine, you might want to reset the error action to just
dumping calls.
recover
S-Plus can automatically enter a debugging tool if an error occurs. The
function recover has basically the same functionality as browser. We use
it to examine the state of the system in case of an error.
S-Plus is told to call recover each time an error occurs by setting an
option as follows:
> options(error=expression(if(interactive()) recover()
else dump.calls()))
If S-Plus is not run interactively but in batch mode, recover should not
be called, as it expects interactive input via the keyboard. That is why we
check if S-Plus is in the interactive mode by using interactive.
If an error occurs, you are prompted with a message and asked if you
want to debug the problem:
Problem in x2: Non-numeric first operand
Debug ? ( y|n ):
Enter a y to start the recover function.
Table 9.1 lists the basic commands that are common to browser,
recover, and debugger.
Pick the debugging tool most appropriate for your current problem. Most
likely you will have to experiment a little with the different tools and options. Finally, note that you can also debug S-Plus internal routines if
necessary.
Chapter 10 on object-oriented programming has some more details and
an extended example of a debugging session in the exercises.
Command                  Effect
?                        Help. Show the names of all variables in the
                         current evaluation frame
where                    Display the current frame list
up (down)                Go up (down) in the list of evaluation frames
q                        Quit
object                   Show the contents of object
any S-Plus expression    Execute the expression and keep results or
                         assignments in the current frame
Control sequence    Effect
\n                  New line
\t                  Tabulator
\\                  Backslash (\)
\"                  Double quote (")
\b                  Backspace
\r                  Carriage return
\octalcode          Number coding a character in octal format
                    (not ASCII; see a PostScript manual)
These control characters can be used in any string, and the S-Plus
functions will deal with them accordingly. For example, a graph title going
over two lines can be created by inserting a new line control character
somewhere in the title string and entering title("First\nand second
line"). Note that there is no space around the \n, as this would add an
extra space to the title.
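The same mechanism works in other output functions; for example, cat interprets the control characters when printing:

> cat("Name\tValue\n")
Name    Value

The \t inserts a tabulator between the two words, and the \n terminates the line.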
Variable 1    Variable 2
114           72
121           78
9.7 Exercises
Exercise 9.1
Take a list L generated as follows.
> L <- list(u=runif(100), n=rnorm(200), t=rt(300))
Derive a matrix where each of the columns contains the mean, variance,
and number of observations of the list elements. The columns should carry
the name of the list elements. Use of any loop is strictly forbidden.
Exercise 9.2
Write a function, lissajous, that accepts the parameters a and b and plots
the corresponding Lissajous figures. Set meaningful default values for a and
b. For a definition of Lissajous figures, refer to Exercise 5.3, p. 145.
Exercise 9.3
Use the functions outer and persp to plot the following mathematical
functions:
f(x, y) = 1 − exp(−1/(x² + y²));  x, y ∈ [−5, 5]
f(x, y) = sin(x) · cos(y);  x, y ∈ [−π/2, π/2] and x, y ∈ [π, 2π]
f(x, y) = x · sin(y);  x, y ∈ [0, 16]
Write a function that takes as parameters the function to draw (one of the
above) and the range with a default setting. Figure out how to use the
functions outer and persp by looking at the manual or the online help
pages.
Exercise 9.4
Create a list with names, telephone numbers, and addresses of five of your
friends. Write a function, tel, that gets passed a name and returns the
appropriate address and telephone number. This function should handle
the case of being passed a name that is not in the list.
9.8 Solutions
Solution to Exercise 9.1
Once the list L is generated using
> L <- list(u=runif(100), n=rnorm(200), t=rt(300))
we can calculate the mean of the list elements by using
> sapply(L, mean)
If we wanted to, we could generate the matrix containing mean, variance,
and number of observations of each list element with a single command:
> x <- rbind(sapply(L, mean), sapply(L, var),
sapply(L, length))
Naming the rows and columns is done in another line:
> dimnames(x) <- list(c("mean", "var", "obs"), names(L))
We can do all the figures in this fashion, but a more elegant way would
be to write a general function that receives the function to draw, the
boundaries of the axes, and, optionally, the number of points on both axes.
function.draw <- function(f, low=-1, hi=1, n=30)
{
x <- seq(low, hi, length=n)
z <- outer(x, x, f)
persp(z)
}
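The function can now be applied to any of the functions from the exercise; for example:

> function.draw(function(x, y) sin(x) * cos(y), -pi/2, pi/2)
> function.draw(function(x, y) x * sin(y), 0, 16, n=50)

Passing an anonymous function directly saves us from declaring and naming each surface separately.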
return(invisible())
}
Look at how the comparison is done. The result of the comparison is stored
in a variable which is used later to extract the data from the database. Here
are some applications of the function:
> telephone("Tom")
Name: Tom
Telephone: 008
Address: Seattle
> telephone("Fred")
Name not found.
If you would like to try out more, try to handle the case where two
friends with the same name are found in the database, and also extend the
function such that a part of the name to search for is sufficient to find the
corresponding entry.
10
Object-Oriented Programming
Let us create a few objects and check what their classes are.
> m <- matrix(1:6,2,3)
> class(m)
"matrix"
> class(1)
"integer"
> class(1:3)
"integer"
> class(1/2)
"numeric"
> class(plot)
"function"
> class(class)
"function"
> class("class")
"character"
Classes like matrix, integer, numeric, function, etc. are basic classes.
We can build on them to generate more complex classes.
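As an example of building a more complex class on these basic ones, the country class used in the following examples can be declared with setClass. This is a sketch; the slot types are our assumption, chosen to match the data shown below:

> setClass("country", representation(
      Name="character", GDP="numeric", Pop="numeric",
      Area="numeric", Inflation="numeric", EU="logical"))

An object of the new class can then be created with new("country", ...).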
Using the function new to generate an object and fill its slots is somewhat
uncontrolled. There are no specific checks to see whether all the data are supplied
or whether the data are invalid. To allow the creation of country objects with only
some data available and to perform basic error checks during creation, we
define a function country:
country <- function(Name="unknown country", GDP=NA,
Pop=NA, Area=NA, Inflation=NA, EU=NA)
{
# type checks
if(length(GDP)!=1)
stop("GDP must be a single value.")
if(length(Pop)!=1)
stop("Pop must be a single value.")
if(length(Area)!=1)
stop("Area must be a single value.")
if(length(Inflation)!=1)
stop("Inflation must be a single value.")
if(length(EU)!=1)
stop("EU must be a single value.")
Slot "Inflation":
[1] 2.4
Slot "EU":
[1] T
This is not really a satisfying display. We want to show the data in a
suitable text format when the name of the object is entered, which we will
do in the next section.
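A show method along these lines can be declared with setMethod; the following is a sketch (the formatting details are our choice, not necessarily those of the definition the book uses):

setMethod("show", "country",
    function(object)
    {
        cat("Name:", object@Name, "\n")
        cat("GDP:", object@GDP, " Pop:", object@Pop,
            " Area:", object@Area, "\n")
        cat("Inflation:", object@Inflation,
            " EU:", object@EU, "\n")
    }
)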
The function show is now dened for objects of class country. We can look
at the data we have declared earlier:
> Austria
   Name GDP Pop Area Inflation EU
Austria 208   8   84       2.4  T
The display above is much more useful than the default method.
What have we done so far? We have looked at how the representation
of a new object class is declared. In order to do some error checking and
handle missing data by setting defaults, we have declared a function to
generate objects of the new class. We have generated objects of the new
class, and we have written a function that extends the generic function
show to display objects of the new class.
These are the very basic elements of object-oriented programming in
S-Plus. To illustrate further applications, we continue to extend existing
generic functions to be class-specic.
Suppose we want to define the "smaller than" comparison for countries. You
can easily verify that the smaller operator (<) cannot be applied to country
objects. We say that a country is larger than another if its area is larger.
The function setMethod can be used as earlier. The generic function we
want to extend is <. We want to extend it to be applicable to two objects
of class country:
setMethod("<", signature(e1="country", e2="country"),
function(e1, e2)
{
return(slot(e1, "Area") < slot(e2, "Area"))
}
)
The signature is the identification of the method via its arguments: we
define a less-than operator for two objects of class country.
From each of the two objects, called e1 and e2, the slot "Area" is
extracted using the function slot. The "Area" slot is of a generic class, numeric, for which the comparison "less than" is defined. We use the function
< to compare the two areas.
Let us now see if the comparison works.
> Austria < Switzerland
F
S-Plus tells us that Austria is not smaller than Switzerland. The counterpart of the smaller comparison, larger, is still undefined for objects of
class country:
par(mfrow=c(2,2))
barplot(x@GDP, names=x@Name, main="GDP")
barplot(x@Pop, names=x@Name, main="Pop")
barplot(x@Area, names=x@Name, main="Area")
barplot(x@Inflation, names=x@Name, main="Inflation")
}
)
Note that slots can be extracted using either the slot function or the @
operator.
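For example, both of the following return the Area slot of the Austria object declared earlier:

> slot(Austria, "Area")
[1] 84
> Austria@Area
[1] 84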
Trying out the plot function gives a graph as displayed in Figure 10.1:
> plot(Austria)
So far, we have only defined the plot function for a single object of class
country. All bars go up to the top of the graph boundaries. It would be
much more useful if we had bar charts of several countries' data. If you try
calling plot with two objects of class country, you notice that only the first
country object is graphed. The pattern of passing two objects to the plot
function matches the pattern that we have specified for graphing a single
country.
Let us define a plot function for two countries. We tell S-Plus where the
function is applicable by the signature of the objects passed, c("country",
"country").
Figure 10.1. The country data graph for a single country, Austria.
Figure 10.2. The country data graph for two countries, Austria and Switzerland.
However, if we try to display more than two countries, our function fails.
We will have a look at this problem in the exercises.
To display which classes have a plot function, we use showMethods:
> showMethods("plot")
Database    x                y
"main"      "ANY"            "ANY"
"main"      "ANY"            "missing"
"splus"     "timeSeries"     "ANY"
"splus"     "signalSeries"   "ANY"
".Data"     "country"        "ANY"
".Data"     "country"        "country"
The last two lines show that we have dened two plot methods: one for
two countries and the other for a country and any other class. They are
stored in the local .Data directory.
10.3 Debugging
If something goes wrong, we need to track down the problem. The error
message gives a good hint as to where the problem occurred. Calling
> traceback()
displays the call stack, the sequence of functions called when the error
occurred. We can tell S-Plus to call the function recover on any error,
putting us into an interactive debugging environment:
> options(error=expression(if(interactive()) recover()
else dump.calls()))
We can also enter the browser on entering or exiting specific functions. The
function trace lets us define when to call the browser function.
> trace(show, browser)
calls browser each time the function show is entered.
> trace(show, browser, exit=browser)
calls browser on entering and exiting the function show.
> trace(show, exit=browser)
calls browser on exiting the function show. To stop tracing of all functions,
the function untrace can be used:
> untrace()
If there are too many calls to the function we want to trace, we can restrict
the tracing to calls with objects of certain classes, such as country. The function
traceMethod is called as follows to trace only calls to show with objects of
class country:
> traceMethod("show", "country", browser)
The arguments are the same as for trace.
10.4 Help
Besides the standard online help,
> help("helptopic")
there are some ways to query object-oriented help pages. To query the
definition of a class, for example matrix, enter
If we are interested in the methods provided, we can ask for the generic
function and its associated library of class-specic functions.
> method ? show
displays the list of class-specic show functions. We can also ask directly
for the show function for objects of class country by entering
> method ? show("country")
10.6 Exercises
Exercise 10.1
Write a plot function for country data that takes any number of countries
and plots a bar for each. It should check if all supplied data are of class
country.
Exercise 10.2
We can hypothetically generate a new country by joining two countries. To
obtain the data for the new country, we define a function "+" as follows:
setMethod("+", c("country", "country"),
function(x, y, ...)
{
newname <- paste(x@Name, "+", y@Name, sep="")
combined <- country(newname) # new object
for (what in slotNames("country"))
slot(combined, what) <- slot(x, what)+slot(y, what)
}
)
Calling the function with two objects of class country generates an error.
Use the S-Plus debugging tools to trace down the problem(s) and correct
them.
Exercise 10.3
List all objects of class country in your current working directory and plot
them.
Exercise 10.4
Take the data from Exercise 9.4 on page 318: name, address, and phone
number of some friends of yours. Generate a class person. Define a
method to show the objects and a search method to locate a person's
data.
10.7 Solutions
Solution to Exercise 10.1
In order to plot more than two countries, we define a new plot function
for two countries, where any number of country objects can be passed in
addition.
return(invisible())
}
)
Applying the function to the four countries defined earlier gives Figure 10.3.
> plot(Germany, France, Austria, Switzerland)
You can see that there is a problem with overlapping labels. This can be
overcome in many ways; for example, by using shorter country names or
by inserting a newline character ("\n") before every other label.
the same, such that we have to remove the ... argument and rename x to
e1 and y to e2.
setMethod("+", c("country", "country"),
function(e1, e2)
{
newname <- paste(e1@Name, "+", e2@Name, sep="")
combined <- country(newname)
for (what in slotNames("country"))
slot(combined, what) <- slot(e1, what) +
slot(e2, what)
}
)
S-Plus accepts the declaration without complaint now. Let us try to add
two countries:
> Austria + Switzerland
Problem in slot(e1, what) + slot(e2, what):
Non-numeric first operand
Use traceback() to see the call stack
S-Plus encounters an error on executing our statement: a first operand
was non-numeric. It proposes using traceback, so that's what we do next:
> traceback()
5: eval(action, sys.parent())
4: doErrorAction("Problem in slot(e1, what) +
slot(e2, what): Non-numeric first operand",
3: slot(e1, what) + slot(e2, what)
2: Austria + Switzerland
1: Message: Problem in slot(e1, what) +
slot(e2, what): Non-numeric first operand
We see where the problem occurred. The problem was in the evaluation of slot(e1, what) + slot(e2, what). S-Plus entered the function
doErrorAction and, in turn, eval.
We suspect that we are adding up two strings, the names of the countries.
We can verify our suspicion by entering the browser function on entering
the function +.
> trace("+", browser)
We execute the + statement again:
> Austria + Switzerland
b(+)>
S-Plus stopped the execution of the expression Austria+Switzerland
when it encountered the first call to the generic function +. We enter a ?
to obtain help.
b(+)> ?
Type any expression. Special commands:
up, down for navigation between frames.
c
# exit from the browser & continue
stop # stop this whole task
where # where are we in the function calls?
Browsing in frame of Austria + Switzerland
Local Variables:
We can check where we are in the levels of calls:
b(+)> where
*2: Austria + Switzerland from 1
1: from 1
We are obviously at the very beginning. S-Plus is evaluating the expression
Austria+Switzerland. We want to inspect the addition of two slots, so we
tell S-Plus to continue the evaluation.
b(+)> c
On entry: Called from: slot(e1, what) + slot(e2, what)
Now, we are at the point where S-Plus entered the loop with the variable
what as the loop variable. Let us see where we are and what the arguments
are:
b(+)> where
*3: slot(e1, what) + slot(e2, .... from 2
2: Austria + Switzerland from 1
1: from 1
b(+)> e1
"Austria"
b(+)> e2
"Switzerland"
We are currently evaluating the expression slot(e1, what)+slot(e2,
what). The arguments are "Austria" and "Switzerland". Note that the
two arguments are strings and not the variables of the same name. We are
obviously adding "Austria" and "Switzerland".
The variable what should therefore contain "Name", the slot label.
b(+)> what
Problem in slot(e1, what) + slot(e2, what):
Object "what" not found
b(+)> ?
Type any expression. Special commands:
up, down for navigation between frames.
c
# exit from the browser & continue
stop # stop this whole task
where # where are we in the function calls?
EU
F
The variables e1 and e2 look different here. They are the two objects
Austria and Switzerland. We have verified that the variable what is currently set to "Name". Let us go down in the levels of frames to where we
were and do the addition:
b(+)> down
Browsing in frame of slot(e1, what) + slot(e2, ....
Local Variables: e1, e2
b(+)> e1
"Austria"
b(+)> e1 + e2
Problem in e1 + e2: Non-numeric first operand
There we are. We have found the problem. We can now terminate the
evaluation or continue. We decide on the latter.
b(+)> c
Problem in slot(e1, what) + slot(e2, what):
Non-numeric first operand
Use traceback() to see the call stack
S-Plus continues the evaluation where it stopped and encounters the error
again.
We have to modify our function + to add two objects of class country.
We decide to exclude the Name slot from the addition and leave the rest
untouched:
> France+Germany+Switzerland
                      Name  GDP Pop Area Inflation EU
France+Germany+Switzerland 3803 150  943       5.3  T
A better addition of two countries would use a weighted inflation (weighted
by GDP). The EU membership of our virtual country is unclear if the
countries' memberships differ.
We decide to add up all slots that don't have any of the names Name,
Inflation, or EU. The function match can determine the indices of those
slots for us. Here is an example:
> match(c("Name", "Inflation", "EU"),
slotNames("country"))
1 5 6
We use the match function inside our "+" function to let it handle the
object slots correctly, even if the order or place of the variables changes.
setMethod("+", c("country", "country"),
function(e1, e2)
{
newname <- paste(e1@Name, "+", e2@Name, sep="")
combined <- country(newname)
add.up <- slotNames("country")
omit   <- match(c("Name", "Inflation", "EU"),
                slotNames("country"))
add.up <- add.up[-omit]
for (what in add.up)
    slot(combined, what) <-
        slot(e1, what) + slot(e2, what)
# special cases
"Switzerland"
One way of calling the plot function is to build up the call in a string and
then let S-Plus evaluate the string as an expression.
> y <- paste(x, collapse=", ")
> y
"Austria, France, Germany, Switzerland"
> z <- paste("plot(", y, ")")
> z
"plot( Austria, France, Germany, Switzerland )"
We have our command stored in a string. We have to let S-Plus evaluate
the string as an expression now. The function parse parses the string into
an expression that is still unevaluated. Passing this expression to the eval
function evaluates and executes the expression.
> eval(parse(text=z))
The above is equivalent to entering the command
> plot( Austria, France, Germany, Switzerland )
on the command line.
This looks much better. Let us now find out interactively how we can get
hold of all objects of class person and scan them for a specific name in the
Name slot. Detecting all objects of class person is easily done by using
the objects function:
> x <- objects(class="person")
> x
"Bert" "Paul" "Peter" "Tom"
We have obtained a vector of strings with the names of the objects. As we
want to avoid loops, we decide to convert the vector into a list and work
with list operations.
> x <- as.list(x)
> x
[[1]]:
[1] "Bert"
[[2]]:
[1] "Paul"
[[3]]:
[1] "Peter"
[[4]]:
[1] "Tom"
So far, we have only the names of the objects in which we are interested.
The next step is to retrieve the data for the objects. We can use the function
get, which takes a string and returns the object of that name. We apply
the function get to each list element by using lapply:
> x <- lapply(x, get)
> x
[[1]]:
Data for Bert Ernie
Address: Berlin
Phone : 009
[[2]]:
Data for Paul Simon
Address: Boston
Phone : 007
[[3]]:
Data for Peter Fonda
Address: Zurich
Phone : 010
[[4]]:
Data for Tom Hanks
Address: Seattle
Phone : 008
Now that we have all objects of class person stored in a list, we extract
the Name slot of each element. If we use sapply instead of lapply, the
result is a vector instead of a list. We store the names in a different variable,
as later we will want to display the entire object matching a given name.
> y <- sapply(x, slot, "Name")
> y
"Bert Ernie" "Paul Simon" "Peter Fonda" "Tom Hanks"
We are almost done. Let us compare the names we have with any given
name and return the corresponding list element in x.
> x[[y=="Peter Fonda"]]
Data for Peter Fonda
Address: Zurich
Phone : 010
The comparison y=="Peter Fonda" gives a vector of TRUE/FALSE elements,
and the list element with a TRUE in the comparison operation is selected.
It might be that we have two friends of the same name. In that case, our
statement would fail. Remember that more than one list element is selected
by using single brackets. So, replacing the statement above with
> x[y=="Peter Fonda"]
Data for Peter Fonda
Address: Zurich
Phone : 010
would also work if we knew more than one Peter Fonda. To check what
happens if two Peter Fondas were on our database, enter
> z <- Peter
and retry the statements above.
We have achieved everything we wanted to do, so we can finally compose
our function to search for a person of a given name.
FindPerson <-
# Find objects of class person with the slot Name as
# specified in the Name parameter
# Name: string
function(Name)
{
People <- as.list(objects(class="person"))
if(length(People)==0) stop("Not a single person found.")
11
Input and Output
It is every user's intention to analyze his or her own data. The data have
to be read into the system before they can be analyzed. This chapter discusses in detail the different ways of reading and writing data. Writing
S-Plus output and transferring data files are other important input/output
functions covered.
If your version of S-Plus contains an import/export facility, this facility
is the easiest way to import and export data sets. Importing data in this
fashion was described in Section 2.3.1.
As a quick overview, you should be aware of the following commands.
To read S-Plus commands from an external file, use the source function.
To read data from a text (ASCII) file, use the importData function or
its alternative, the read.table function. To transfer data to and from
different S-Plus systems, use the data.dump and data.restore functions.
To record a session in a text file, use either the sink or record function.
Each of these topics and functions is discussed in this chapter.
Table 11.1. File types and extensions with the import/export facility in UNIX.
DBASE (.dbf)
FASCII (.x, .fsc)
HTML (.htm*)
MATLAB (.mat)
ORACLE
SAS (.sd2)
SAS TPT (.tpt, .xpt)
SPSS (.sav)
STATA (.dta)
Exporting data is done in much the same fashion, with the syntax being
of the form
> exportData(data, "file", type="filetype")
where data refers to the data set to be exported, file specifies the name of
the output data set to be created, and type specifies the type of file being
created. As with importData, if the appropriate extension is given in the
file name, type need not be specified. Most of the options available with
importData are also available with exportData.
Note There are a very few strange cases where S-Plus refuses to import or
export ASCII data with the expected format. In such cases, you have to
give up the convenience of the import/export facilities presented here and
use the basic method explained in Section 11.4. Here's a quick example.
Use any editor to create an ASCII file containing two lines of data: test1
12A23, and test2 43E12. Save the file and try reading it into S-Plus using
any of the import methods described so far. What do you get, and why
do you get it? If you are familiar with scientific notation, you might guess
what S-Plus has assumed.
This point brings us to another: we can't emphasize enough how important it is to check the data that you have just imported or exported.
1 2 3
4 5 6
7 8 9
The command
> x <- scan("data.dat")
reads in a vector with nine elements. We can store it as a matrix by
specifying
> data <- matrix(x, ncol=3, byrow=T)
to read three columns rowwise.
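The resulting matrix looks like this:

> data
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6
[3,]    7    8    9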
Note If your data file is not in the default directory, you must give the
full path name of its location. Be aware that, with Windows, you have to
use double backslashes instead of single ones to specify path names. For
example, a file in the default directory might be referenced by
> scan("c:\\spluswin\\home\\data.dat")
The forward slash used to specify path names in UNIX can also be used to
specify a Windows path, as in
> scan("c:/spluswin/home/data.dat")
have to use the as.numeric function to test for and then to convert the
original character input into numeric data.
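A sketch of that approach (the file name mixed.dat is hypothetical):

> x <- scan("mixed.dat", what="")   # read all fields as character
> as.numeric(x)                     # fields that are not numbers become NA

The resulting NAs can then be detected with is.na to decide how to treat the offending fields.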
# create x
# dump x into xdump.dat
Using any editor to examine the contents of the file xdump.dat shows that
it contains
"x" <- c(1, 2, 3, 4, 5)
This is especially useful for transferring S-Plus data from one hardware
platform to another. If you want to dump all variables at once, use
> data.dump(objects(), "dump.all")
Note The functions dump and restore still exist in S-Plus but are recommended only for the case when it is absolutely necessary to look at the
contents of a dump file using an editor. If that is not the case, the new-generation functions data.dump and data.restore should be quicker and
more efficient.
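The counterpart for reading such a dump back in is data.restore; a minimal sketch, using the file created above:

> data.restore("xdump.dat")   # recreates the object x in the working database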
Note To transfer data to an older version of S-Plus, you may have to use
the oldStyle=T option. Check the help entry for detailed information on
affected versions.
> sink()
is used to close the file and direct the output back to the screen.
The record function can be used in the same way as the sink function.
The difference between the two functions is that sink saves only the output,
whereas record saves both the commands and their output.
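A minimal sketch of a sink session (session.txt is a file name we chose):

> sink("session.txt")   # redirect all output to session.txt
> mean(1:10)            # nothing appears on the screen
> sink()                # close the file; output returns to the screen

As described above, record can be used in the same way if the commands themselves should be saved as well.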
11.9 Exercises
Exercise 11.1
In this exercise, we will practice reading and writing one of the S-Plus
data sets. The object state.name contains the names of the 50 states in
the United States, and state.x77 contains various attributes about the
states.
(a) Use the exportData function to write the populations and incomes
to a file named mystates.dat.
(b) Read the data back into S-Plus, from the mystates.dat file, using
the importData function.
(c) What happened to the row (state) names? Repeat parts a and b using
appropriate options such that the state names are preserved.
(d) Repeat parts a and b using a file type other than ASCII (we will
use Excel) such that the state names are preserved. Are the options
different from those used in part c?
Exercise 11.2
We want to track an investment with a fixed interest rate over some years.
Develop a function that takes the parameters initial capital, interest rate,
and time in years, and creates a graph that shows the amount of capital over
the years. The capital stock changes every year, as we leave the interest
obtained on the account.
The trick is that the function should ask the user to supply any of the
parameters that have not been supplied to it. The parameters are the initial
capital investment, the interest rate, the length of time involved, and the
first year of the investment. Be sure to cover the possibility that the user
has entered the interest rate as a percentage rather than as a fraction. This
is going to require some extra work on your part, as we haven't covered
some of these functions in the chapter.
Hint 1: Use the help facility to nd out about the readline, parse, eval,
and missing functions.
Hint 2: The formula to calculate the capital after n years, with an initial
capital x and an interest rate r, is given by x (1 + r)n . For example, we
invest a capital of $10 at an interest rate of 10% (= 0.1). After 1 year, the
capital becomes $10 plus $1 in interest. After 2 years, we obtain $11 plus
$1.10 interest, such that the new capital stock is $12.10.
Using the formula, we get $10*(1+0.1)2 = $10*1.21=$12.10.
11.10 Solutions
Solution to Exercise 11.1
To export the state.x77 data using ASCII format, we begin by using the
exportData function. The only twist is that we want only the first two
columns of data. Hence, we have to look in the help system at the detailed
syntax of the function and find the keep option. The file can then be created
using the following command:
> exportData(state.x77, "c:/mystate.dat", "ASCII",
    keep=c("Population", "Income"))
Another point to remember is that the name of the data set to be exported
should not be contained in double quotes. You can open the file using any
editor and look at it for yourself. In particular, take note of what S-Plus
uses as a delimiter.
Reading the data back into S-Plus is done via the importData function
with the following command:
> test.ascii <- importData("c:/mystate.dat", "ASCII")
The syntax is the same to import the data as to export it, except that the
columns don't have to be specified (we take what we can get). Note that
the state names don't appear. The first several rows are as follows:
  Population Income
        3615   3624
         365   6315
        2212   4530
        2110   3378
       21198   5114
         ...    ...
           Population Income
Alabama          3615   3624
Alaska            365   6315
Arizona          2212   4530
Arkansas         2110   3378
California      21198   5114
...               ...    ...
We now have the explanations we need to work with our data.
The only change we have to make to write the data into an Excel file
instead of an ASCII file is to change the extension of the file name and to
change the file type to EXCEL. Our export command now becomes
> exportData(state.x77, "c:/mystate.xls", "EXCEL",
    keep=c("Population", "Income"), rowNames=T)
To import the Excel file back into S-Plus, we use
> test.xls <- importData("c:/mystate.xls", "EXCEL")
Not surprisingly, the output looks exactly the same as that from above.
Note We didn't really need to specify ASCII or Excel above because
they are the default file types for the respective file name extensions that
we used.
Figure 11.1. The development of a capital of $100 over 15 years, with an annual
interest rate of 5%.
12
Tips and Tricks
As we are getting towards the end of the book, we look at specific details
you will probably come across after having worked with S-Plus for a while.
If you have just recently started to work with S-Plus, you might want to
take a look at what is covered in this section and come back to it later,
after having gained some experience, so that you can judge what is useful
to you.
We will have a look at practical hints such as how to keep an overview of
your objects, how to clean up from time to time, and how to choose names
for objects. We will look at developing program code and running batch
jobs, including C and Fortran programs, and maintaining libraries.
Furthermore, we will cover the special topics of using factors in S-Plus and
including graphs in documents.
12.1.1
After a while, the fact that S-Plus stores all the objects you create in your
data directory until you delete them (even after quitting a session) results
in having lots of objects, some of which you don't need anymore. For this
reason, we advise you to clean up the objects from time to time. The easiest
way is to remove the unneeded objects one by one, using the rm command:
> rm(unneeded.object)
Several objects can be removed at once just by listing them, separated by
commas. If you want to specify a set of objects using wild cards, use the
remove function. The syntax for removing all objects beginning with an x
is for UNIX systems
> remove(objects(pattern="x"))
and for Windows systems
> remove(objects(pattern="x*"))
UNIX systems use regular expressions for matching a pattern to objects;
Windows systems use the Windows patterns (* for any number of
characters, ? for a single character). If you enter the Windows-specific
command on a UNIX system, everything is removed!
To remove absolutely all of the objects in the current directory (think
twice before you try this!), enter
> remove(objects())
12.1.2
When writing a function around the creation of a graph file, you normally
want to finish by exiting the function and restoring the state of the graphics
device to the state it was in before the function started. For this purpose,
you can store all graphical settings in a variable to retrieve later. This is
done by using (inside the function, before issuing any graphics command)
the commands
> par.old <- par()
> on.exit(par(par.old))
Note that putting the statement par(par.old) at the end of the function
does almost the same thing except that the status is not restored if your
function does not exit properly (if it crashes, for example). on.exit is
always executed, whenever and however the function is left.
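Putting the pieces together, a function built along these lines might look as follows (a minimal sketch; the name plot.two is hypothetical):

```
plot.two <- function(x)
{
    par.old <- par()           # save the current graphical settings
    on.exit(par(par.old))      # restore them however the function exits
    par(mfrow=c(2, 1))         # temporarily switch to a 2-row layout
    plot(x)                    # first panel: scatterplot
    hist(x)                    # second panel: histogram
}
```

Even if hist or plot fails, on.exit still puts the settings back.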
12.1.3
Naming of Objects
Some strategies become useful if you work with S-Plus over a longer period
of time. Objects (variables and functions) that belong together might become
so numerous that their relationship is no longer visible when looking at the
names of all objects in the data directory.
Note You typically create a lot of variables that you don't need in the long
term. To identify them, it is a good idea to follow the rule that all functions
created for temporary use start with f or g and temporary variables with
x, y, or z. Using this rule, you can delete all these variables from time to
time to clean up the data directory.
Note Variables and functions belonging together could start with the same
string, have a period (.), and then have the actual identifier as the second
part of the name. For example, if you examine your own car data, you
might call the data itself car.data. The function to read the data from the
external file could be named car.read, the function for plotting the data
might be called car.plot, and the vector of means over the columns could
be stored in car.means.
12.1.4
Repeating Commands
Changing the prompt and continuation prompt (to ask for completion of
the expression) is easily done by altering the following options:
> options(prompt=";", continue="\007")
If you prefer a space between prompt and command, add a blank after
the semicolon. Making the terminal beep if the statement is incomplete
wont work on Windows machines. Instead, we can change the continuation
prompt to a tabulator:
> options(prompt=";", continue="\t")
If you develop a function, it pays to figure out how to set up a
good environment. If you don't want to reenter the function over and over
again, you need to store it, edit it, and make the modifications permanent
in S-Plus.
We want to point out in the beginning that the following principles work
for Windows and UNIX. Windows offers an additional way of developing
code, which is presented later.
Note It is a good idea to have only a single function in a file and to give
the file the same name as the function it contains. Keeping this structure
will make it easy to find code and to determine the contents of a file by
its name.
12.2.2
The code stored in your file can be read into S-Plus as if it were typed
on the command line by using the source command.
> source("filename")
After examining the output, you might want to change the function and
repeat the cycle. Don't forget to save the file before running the source
command.
If you know and use the Emacs editor, you should take a look at ESS
(Emacs Speaks Statistics). ESS allows you to run S-Plus from within
Emacs comfortably and communicate with the S-Plus process, for example
by sending marked blocks of text to it or by retrieving commands and
output easily. The process is very comfortable and powerful, but it takes
some effort to learn Emacs. ESS itself comes with some documentation that
gets you started, and it will probably be worth the effort.
Even better, Emacs and ESS are free. The Web reference is given in the
bibliography (pp. 425ff.).
Windows-Specific Alternatives
If you run S-Plus for Windows, there is a convenient way of editing and
running your code. Create a file with the extension .SSC (e.g.,
Splusfun.SSC). The extension .SSC is known to Windows as an S-Plus
script.
If you double-click on the file icon, S-Plus will start up in the current
directory. It will open a Script window consisting of two parts. The upper
part shows the file contents, whereas the lower part shows the S-Plus
output if you run your code. Click on the play button in the toolbar
section to run the entire code (see Figure 12.1).
Figure 12.1. The play button to run S-Plus scripts on Windows systems.
You can also select just a part of the code and click on the play button.
Only the selected code will be executed.
Note If your file contains a function declaration and you make your code
known to S-Plus by one of the methods mentioned, you won't see any
output. What happens here is exactly what happens if you enter a
declaration on the command line. The new object (function, data, or
whatever) is now known to S-Plus. A function needs to be called to run it.
If you have the Enterprise Developer edition of S-Plus, you might also
take a look at the Workbench based on the Eclipse toolkit. It allows you
12.2.3
Data frames are convenient for working with rectangular data sets
(matrices) of heterogeneous objects, such as numerical and character data.
Matrices or arrays must consist of a single type. This has been illustrated
in Section 4.1.3 (p. 104).
Data frames can be used similarly to matrices: their elements can be
accessed as if the data set was a matrix. The third column of the data frame
barley can be accessed by specifying the column number, as in barley[,
3], and it can be referenced by its column label, barley[, "year"]. One
can access the row and column labels by using dimnames, and the data
set can even be transposed (which converts the data frame into a matrix
though).
Data frames are lists, too (they are stored as lists internally). They have
constraints though: each element must be a vector, and all vectors must
have the same length. All list functions can therefore be applied to data
frames, too!
You might use names to assign new names to the data frame elements,
and you can access the third column of the barley data frame by
barley[[3]], barley$year, or barley["year"].
Treating the data frame as a list will often yield a straightforward solution.
For example, if you want to check whether the elements of a data frame
are factors, enter
> sapply(barley, is.factor)
 yield variety year site
     F       T    T    T
This in turn can be used to apply a function to the factors of a data frame
only (or the numeric values). A function can be applied to each list element
that performs the corresponding action depending on the type (the class)
of the data.
> lapply(barley, function(x)
{ if(is.factor(x)) ... else ... })
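For instance, one concrete way to fill in the ellipsis above (a sketch, not a definitive recipe) is to tabulate the factor columns and average the numeric one:

```
> lapply(barley, function(x)
    { if(is.factor(x)) table(x) else mean(x) })
```

Each list element is then summarized in a way that matches its type.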
Note Data frames are stored internally as lists. It is generally better to
think of data frames as lists than as matrices. For example, using the
function length on a matrix will give the total number of matrix elements
(which equals the number of rows times the number of columns). For a list,
length gives the number of list elements. The latter is also the case for
data frames. Entering length(barley) returns 4. However, one can also
query the dimension of the data set: dim(barley) yields 120 rows and 4
columns.
(Footnote: Eclipse and the Workbench were introduced with S-Plus
Version 7.0 and are only contained in the Enterprise Developer edition.)
12.2.4
Figure 12.2. A graph sheet with six pages, each named after the vehicle type it
shows.
To illustrate that there is almost never just one way of solving
things, the following example shows how to first create all graphs (all
pages of the graph sheet) and rename them afterwards.
(Footnote: You might have noticed that the y-axis shows Percent of Total,
but that the values shown do not sum up to 100. It turns out that S-Plus
(in this version at least) includes NA values in the calculation of
percentages. Can you think of how to correct it such that the values sum
up to 100?)
12.2.5
Calling C Routines
This section contains a short example of how to perform the different steps
of calling a routine in detail. We will write a C function that takes a double
precision parameter and gives back the third power (cube) of it.
S-Plus only exchanges pointers with external functions. For this reason,
the self-written function in C or subroutine in Fortran has to check the
length of the data passed, as well as other possible sources of errors.
To pass the arguments properly, an explicit type conversion in S-Plus
before calling the external function is advisable (e.g., as.double for double
precision objects). Table 12.1 lists the S-Plus modes and their equivalent
types in C and Fortran.
Table 12.1. S-Plus types and C/Fortran equivalents.
S-Plus      C            Fortran
logical     long         LOGICAL
integer     long         INTEGER
single      float        REAL
double      double       DOUBLE PRECISION
complex     s_complex    DOUBLE COMPLEX
character   char         CHARACTER*(*)
raw         char
list        s_object
Example 12.3 shows the C function for calculating and returning the
cube of a double precision value x.
Example 12.3. A simple C program
The following C function takes a double precision vector x and returns
its cube back to the calling routine. Note that the variables are always
referenced by pointers. Note also that the return value of a C function is
always lost. All results need to be returned via the argument list.
#include <stdio.h>
void cubic(double *x, double *xcube, long *n)
{
int i;
printf("program cubic started.\n");
for(i=0; i < *n; i++)
xcube[i]=x[i]*x[i]*x[i];
return;
}
Example 12.4 shows a function for passing the value x from the S-Plus
environment to the C function and returning the cube of x back to the
S-Plus environment; that is, after linking the function to the session, there
is no visible difference between an internal call and this external function
call.
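A wrapper along the lines just described might look as follows (a sketch, not the book's Example 12.4 verbatim; it assumes the compiled C routine has already been linked into the session):

```
cubic <- function(x)
{
    x <- as.double(x)              # explicit type conversion (see Table 12.1)
    n <- length(x)
    out <- .C("cubic",
        x=x,                       # input vector, passed by pointer
        xcube=double(n),           # space for the result
        n=as.integer(n))           # length, so the C code can check it
    out$xcube                      # return the cube of x
}
```

The .C interface returns the (possibly modified) arguments as a list, so the result is picked out of out$xcube.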
12.2.6
Batch Jobs
Batch jobs are especially useful on multitasking systems. For example, if
you want to run a simulation program overnight, or if you want to leave
without staying logged on, you can run an S-Plus program in the
background using a batch technique.
A batch job consists of nothing more than a sequence of commands
that the interpreter executes one after another. As this is a noninteractive
technique, you do not need to and cannot intervene. The output is not
written to the terminal but to an output or log file.
There are only a few steps to follow to run an S-Plus batch job.
Note Do not run two S-Plus sessions accessing the same data directories,
such as an interactive session and a batch job working on the same data.
If you work extensively with S-Plus in batch mode, consider using
SBATCH. It provides verbose logging: a much more extensive record of
what happens while the commands are processed. Under UNIX, use
Splus SBATCH inputfile
Just enter Splus SBATCH to see the available options. Under Windows, use
Splus SBATCH -input inputfile
Under Windows, for faster execution, you can also specify -console to tell
S-Plus not to start up the GUI. Note that under Windows you specify
/BATCH (with a leading slash) but SBATCH (without the leading slash).
12.2.7
Libraries
S-Plus comes with some preinstalled libraries. If you use the command
search, S-Plus displays the list of paths that are automatically
searched. The commands searchPaths and getenv("SHOME") complete
the information on full directory paths.
When you reference a variable, the directories are searched in this order.
Only if the referenced variable is not found in any of these directories is an
error message issued. Note that the directory .Data in the current directory
shows up in position 1. All global objects are stored here.
To add a new library to the current session, you enter the command
> library(library-name)
where library-name is a directory in the library directory. Enter
> library()
to see the list of currently available libraries.
To install a library, the administrator creates a directory with S-Plus
data sets in the S-Plus library directory. This library can then be accessed
by all users.
If you want to see what is in a specific library, enter
> objects(libraryname)
Use search to see all libraries currently attached.
> search()
To write something to the library, use
> assign("name", data, where=n)
where name is the variable name (in quotes), data is the data (a valid
S-Plus expression), and n is the number of the search path list.
If you want to access a data directory in a specic location, use attach,
as in
> attach("/users/andreas/Sproject")
Note Windows systems specify path names with backslashes (\) as a separator. From within S-Plus, forward slashes (/) can be used to replace
backslashes. If you insist on backslashes, use a double backslash (\\), as
a single backslash indicates a special character. To access the directory
C:\SPROJECT, use
> attach("c:\\SPROJECT")
The S-Plus distribution includes a set of libraries. Some of those libraries
are already available on start-up, such as trellis or nlme3.
Not all the libraries are supported by Insightful, as indicated by a message
that is displayed if you access the library. As an example, the library
maps provides a function map that creates a map with the state boundaries
of the US. Another library, missing, provides a collection of routines
that deal with missing values and (multiple) imputation, provided by Joe
Schafer. To get more information on a particular library, you can enter
> library(help=libraryname)
12.3 Factors
A factor object is an object with elements that are not numeric but categorical. Examples of categorical data are colors (red, green, blue, yellow, ...)
or sex (male, female). Ordered categorical variables are factors, too, with
the additional constraint that the categories have an inherent ordering, as
with age categories (young, middle-aged, senior) or lengths (short, normal,
long).
S-Plus has a class for factor variables called factor. Ordered categorical
variables can be assigned the class ordered, which is an extension of the
class factor. Every object of class ordered is also a factor.
Using factor objects can be a little confusing under certain circumstances.
Some functions do not expect factors and can behave unexpectedly on being
supplied factors accidentally. Other functions expect factors, and if they are
supplied with non-factor objects, they attempt to convert them into factors.
This operation might result in an unintended result or an error.
Of course, factors as an object class of their own serve a purpose. First of
all, statistical modeling functions will recognize them and act accordingly.
Furthermore, in particular for large data, factors are a much more efficient
way of storing the data compared to storing them as character strings.
We will shed some light on factor objects to better understand what's
going on.
12.3.1
Let us define three objects, one of class character, one factor, and one
ordered. We set a seed (a starting value) for the random
number generator before sampling. This ensures that exactly the same
sample is generated if you try out the commands below (see the note on
page 223).
> set.seed(1234)
# random no. generator seed
> x <- sample(c("cheap", "acceptable", "expensive"),
10, replace=T)
> x
 [1] "acceptable" "cheap"      "expensive"  "acceptable" "expensive"
 [6] "expensive"  "expensive"  "cheap"      "acceptable" "acceptable"
[Output: the corresponding factor and ordered objects print the same ten
values, shown without quotation marks, followed by their levels.]
12.3.2
Using the default instead of class-specic methods can often give insights
to internal representations of objects. The default print function is called
print.default, and we can apply it to the object z.
> print.default(z)
 [1] 2 1 3 2 3 3 3 1 2 2
attr(, "levels"):
[1] "cheap"      "acceptable" "expensive"
attr(, "class"):
[1] "ordered" "factor"
The factor object z is stored as a vector of integer numbers 1, 2, and 3.
It has an attribute levels (containing the levels cheap, acceptable, and
expensive, in that order) and an attribute class containing the classes to
which the object z belongs (ordered and factor, in that order).
Can you see now what we did before by assigning new values to the levels
of z and why the data changed instead of the order of the levels?
One might wonder if we can use those numeric representations for
calculations. Trying it out on the three objects we already generated shows
slightly different behavior:
> 2*x
12.3.3
Factor levels are used by certain functions in S-Plus. If data are
summarized in a table, the order of the table elements is determined by the
order of the levels of the object. Therefore, the tabulation of y and z
differs slightly.
> table(y)
 acceptable cheap expensive
          4     2         4
> table(z)
 cheap acceptable expensive
     2          4         4
Trellis-type graphs use the order of the levels as well. For the barley data
set, the levels of the year are assigned in reverse order.
> levels(barley$year)
[1] "1932" "1931"
This is why, if you plot the barley data set with year as conditioning
variable, 1932 is shown before 1931. Check the graph you get on entering
> dotplot(site ~ yield | year, data=barley)
You should now be able to make a copy of the data set and reverse the
factor levels such that 1931 is shown before 1932.
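One way of doing this (a sketch; barley2 is just a hypothetical name for the copy):

```
> barley2 <- barley                              # local copy
> barley2$year <- factor(barley2$year,
      levels=c("1931", "1932"))                  # reverse the level order
> dotplot(site ~ yield | year, data=barley2)     # 1931 now comes first
```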
Note Recall that you do not see the levels of a factor variable by printing
out the raw data. There can be empty factor levels with no corresponding
observation, and the order of levels is not visible either. The order of levels
becomes relevant when the data is processed, for example in tabulating it
or in displaying it graphically.
12.3.4
The following illustrates some situations where factors might not behave
the way you want them to behave.
Using the paste Function
The function paste is a frequently used tool. It can be used to generate
row or column labels for the data or graph titles. We will use the paste
function to illustrate the different results it generates for character, factor,
and ordered objects.
> paste("x is", x)
 [1] "x is acceptable" "x is cheap"      "x is expensive"
 [4] "x is acceptable" "x is expensive"  "x is expensive"
 [7] "x is expensive"  "x is cheap"      "x is acceptable"
[10] "x is acceptable"
> xdata
  one two three
1   1   2     3
> xi <- factor(c("one", "two", "three"))
> xdata[, xi[1]]
[1] 1
> xdata[, xi[2]]
[1] 3
You can probably guess why xdata[, xi[2]] yields the value 3 and not 2.
It is because xi[2] is internally represented as 3. The factor level two shows
up in position 3 of the levels, and to subscript the data frame, the numerical
value 3 is used and not the string "two".
> levels(xi)
[1] "one" "three" "two"
So if you want to access data using the row and column names, make sure
that you don't use a factor variable for subscripting (or convert it to integer
values beforehand).
12.3.5
Factors are often generated when reading or manipulating data sets. For
example, when reading data using the graphical user interface (with the
default settings), S-Plus will recognize certain data as factor objects. If
the functions importData, openData, or the older read.table are used, the
defaults are set to convert string variables into factors. If you construct a
new data frame using data.frame, character variables will become factors
in the data frame.
As this might happen without your explicitly noticing it, it might be
preferable if the variables are not stored as factors but rather as characters.
They can still be converted later if necessary. As a general rule of thumb,
factors are preferable if Trellis graphs and statistical modeling functions
are used. If the major aim is to manipulate the raw data, characters might
be preferable. In general, manipulations using characters will be faster
than those using factor objects.
To prevent conversion to factors, use the settings stringsAsFactors=F
for importData and as.is=T for read.table. Have a look at the
documentation for the individual functions. The menu system of S-Plus for
Windows allows you to change the default class for objects from factor to
character. For creating a data frame with character data, use data.frame
with I(x) as argument, where x needs to be replaced by the object to
include in the data frame (example: data.frame(I(letters))).
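As a small sketch of the difference (the column name grade is chosen here for illustration only):

```
> d1 <- data.frame(grade=c("a", "b", "a"))      # character data ...
> is.factor(d1$grade)                           # ... becomes a factor
[1] T
> d2 <- data.frame(grade=I(c("a", "b", "a")))   # I() keeps it as is
> is.factor(d2$grade)
[1] F
```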
You should be aware that there are some side effects. As an example, the
summary function treats character objects differently from factor objects.
y
 acceptable:4
 cheap     :2
 expensive :4
z
 cheap     :2
 acceptable:4
 expensive :4
The summary function displays a table of values for the factor object and
some information about the object's internals, length and class, for the
character vector, as if it does not really know what to do with it.
12.3.6
The levels of a factor are an attribute of the object. They are not revised
when changing the data of the object. For the objects as defined above,
the valid levels are cheap, acceptable, and expensive. So, if we assign
something else to an element of the object, it will not be accepted as a
proper definition:
> y[1] <- "very expensive"
Warning messages:
  replacement values not all in levels(x): NAs generated
  in: y[1] <- "very expensive"
> y
 [1] NA         cheap      expensive  acceptable expensive
 [6] expensive  expensive  cheap      acceptable acceptable
If we want to add a further level to the object's levels, we need to use
> y <- factor(y, levels=c(levels(y), "very expensive"))
Now, assigning the value "very expensive" to y[1] will generate a valid
value.
> y[1] <- "very expensive"
> y
 [1] very expensive cheap          expensive      acceptable
 [5] expensive      expensive      expensive      cheap
 [9] acceptable     acceptable
In a similar fashion, if all elements of a certain value are dropped, the level
still lives on in the levels attribute.
> y1 <- y[y!="cheap"]
> levels(y1)
[1] "acceptable" "cheap" "expensive" "very expensive"
Although "cheap" no longer exists in the data set, the level "cheap" still
exists (and will be used, for example, by table(y1)).
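For illustration, tabulating y1 shows the empty level with a zero count (a sketch, assuming the objects defined above):

```
> table(y1)
 acceptable cheap expensive very expensive
          3     0         4              1
```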
This can make sense, of course, as we might have a sample that just
by chance does not contain a certain observation, but we still might want
to document a factor level that has not been observed. Sometimes, for
example when generating graphs, it is better to drop an empty factor level,
unless you really want an empty graph (reflecting no data in that factor
level). A straightforward way of doing this is to redeclare the object as a
factor.
> y1 <- factor(y1)
> levels(y1)
[1] "acceptable" "expensive" "very expensive"
If you use Trellis graphs, you might also use the subset option to exclude
the empty factor level (along the lines of subset=(y!="cheap")).
You can experiment for yourself with the barley data set. Create a copy
of the data set in your working directory by entering
> barley <- barley
Now, drop the data for the site Morris and remove Morris as a factor level.
Have a look at the function factor to examine how it works and why
unused levels are dropped.
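A sketch of these steps (assuming the local copy of barley created above):

```
> barley <- barley[barley$site != "Morris", ]   # drop the Morris rows
> barley$site <- factor(barley$site)            # redeclare: drops the empty level
> levels(barley$site)                           # Morris no longer appears
```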
If you want to continue thinking about factors, jump to the exercises
(p. 392) right away.
12.4.1
If you work with S-Plus for Windows, the simplest way of transferring a
graph from an S-Plus graph sheet to another application is by using copy
and paste (activate the S-Plus graph sheet, press Ctrl-C, move to MS
Word, and paste with Ctrl-V). The only potential drawback with this
method is that it is not automated. If automation of the graph creation is
desired, follow the procedure outlined below and save the graphs to files,
for example in WMF format.
The WMF (Windows MetaFile) format is an output format that can be
generated on S-Plus for Windows (using graphsheet) and also on S-Plus
for UNIX (using wmf.graph). Do not specify a particular size of the graph,
but rescale it within your application.
If you intend to create a PowerPoint presentation and have S-Plus for
Windows, use the PowerPoint wizard from the menu. It allows selection
of multiple graphs and creates a PowerPoint slideshow from all selected
graphs.
Note If you want to create many graphs without giving explicit names to
each of the files, S-Plus can create a series of files automatically. Instead
of specifying a file name such as figure.wmf, specify a file name with the
character # in it.
For example, if you call
> graphsheet(file="figure##.wmf", format="WMF")
the first graph is saved to the file figure01.wmf, the second to figure02.wmf,
and so on. Just make sure to include multiple # signs if you intend to create
many graphs. The same technique works for all other formats, including
PostScript.
Under some rare circumstances, we found that minor details such as
axis tickmarks can come out awkward using copy and paste. For those
cases, we recommend using the WMF file format instead.
Most Windows applications (including MS Word) can also include
PostScript graphs. You just need to figure out how to insert the graph
file into your Word or WordPerfect document, your PowerPoint slides, or
any other document. If the PostScript graphs do not contain a preview,
you will not see the graphs but a placeholder until you print them out
(on a PostScript printer). If you don't have a PostScript printer, refer to
Section 12.4.4.
12.4.2
PostScript graphs are of high quality. They are the standard for professional
typesetters. Encapsulated PostScript (EPS) ensures that a graph can be
integrated with other documents as an object of known size. EPS graphs
can be used with almost any word processor.
To generate an EPS file using S-Plus for Windows, use
> graphsheet(file="graphfilename.eps", format="EPS")
On UNIX systems, use
> postscript(file="graphfilename.eps", horizontal=F,
    onefile=F, print.it=F)
Now, enter a command to generate a graph and close the device, so that
the file is completed:
> dev.off()
You can alternatively create the graph on a screen device and copy it to a
PostScript file when it is finished. Use (for UNIX)
> printgraph(method="postscript", file="graphfilename.eps",
    horizontal=F)
or (for Windows)
> dev.copy(graphsheet, format="EPS",
    file="graphfilename.eps")
> dev.off()    # close the device
If you have a PostScript previewer such as GhostScript/GhostView (see
the references), you can look at the PostScript graph on the screen.
An Encapsulated PostScript graph is essential to ensure easy integration
of the graph into a text document. Encapsulated means basically that the
graph file contains information about the size of the graph. Word processors
can derive all necessary information to integrate and resize the graph from
the file.
PostScript files are plain text files that you can look at. An Encapsulated
PostScript graph contains header information such as the following:
%!PS-Adobe-3.0 EPSF-3.0
%%Title: (S-PLUS Graphics)
%%Creator: S-PLUS
%%For: (Andreas Krause)
%%CreationDate: Sat Apr 9 11:37:01 2005
%%BoundingBox: 20 170 592 622
The first line gives information about the EPS (Encapsulated PostScript)
level, and a few lines later, the bounding box defines the size of the overall
graph.
Note A graph file should not contain more than one graph. A
multiple-graph or even multiple-page document is difficult if not impossible
to handle as a (single) object.
The graph size should not be specified from within S-Plus. Creating
a full-page graph and resizing it within the text processing system gives
better results.
12.4.3
Note that the gure is automatically scaled to the documents text width.
The height is scaled proportionally to the width. The gure can now be
included into the document.
\Figure{Text underneath the figure\label{fig nicegraph}}%
{The short title for the list of figures entry}%
{graphlename.eps}
The figure is referenced via its label:
Figure~\ref{fig nicegraph} (page~\pageref{fig nicegraph})
You need to run LaTeX twice, once for writing the new figure and label to
files, and once for resolving the reference.
Finally, there are many people working with LaTeX as a text processor
and S-Plus as statistics and graphics software. Therefore, many tools
are available for interfacing the two. For example, on the StatLib server (see
Section 14.3, p. 414), there is an S-Plus function for writing S-Plus data
sets directly to a LaTeX table environment.
12.4.4
12.4.5
If you need to add Greek letters or even formulas to your graphs, there are
some ways of doing it.
If you run S-Plus for Windows, you can create a graph and edit it. You
might need to convert the graph to objects first by clicking with the right
mouse button on the graph and selecting Convert to objects.
Select the text to change and change the font to symbol. You can do so
by using the context menu (right-click on the object) and selecting Font
from the Format menu. A p will result in a lowercase pi (π) and a P in
an uppercase pi (Π), an s will give a sigma (σ), and an S will generate an
uppercase sigma (Σ).
The Greek (symbol) font and other fonts cannot be combined within a
single object such as a title, so that if you want to mix Greek and Latin
letters, you need to create two text objects and align them against one
another.
If you run S-Plus on a UNIX system, you can use the symbol font as
well. Use
> text(xpos, ypos, text, font=13)
to add Greek letters to your graph at a specific position.
This method can be used for axis labels and other string positioning as
well. As under Windows, you cannot mix Greek and Latin characters. If
you intend to do so, check the StatLib library for contributed functions
from other S-Plus users.
If you use TeX/LaTeX as the word processor, consider using the psfrag
utility, available from any CTAN archive. Using this utility allows TeX
formulas to be typeset within graphs.
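A minimal sketch of how psfrag is typically used (the placeholder string XLAB and the formula are our own choices, not from the original): put a plain tag into the graph from within S-Plus,

> plot(x, y, xlab="XLAB")

and replace the tag by a formula in the LaTeX document:

\usepackage{graphicx,psfrag}

\psfrag{XLAB}{$\mu \pm 2\sigma$}
\includegraphics{graphfilename.eps}

When the document is processed with latex and dvips, every occurrence of the tag in the EPS file is typeset as the given formula.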
12.5 Exercises
Exercise 12.1
Display the yield in the data set barley in histogram form. Use a Trellis-type graph, use one panel per site, and arrange the sites such that the
order corresponds to the maximum yield per site. Check your graph for the
correct order of sites.
Exercise 12.2
We look at the different behavior of objects of classes character, factor,
and ordered. Take the vectors x, y, and z as generated on page 380:
> set.seed(1234)
> x <- sample(c("cheap", "acceptable", "expensive"),
10, replace=T)
> y <- factor(x)
> z <- ordered(x, levels=c("cheap", "acceptable",
"expensive"))
Printing them gives
> x
 "acceptable" "cheap"      "expensive"  "expensive"  "expensive"
 "expensive"  "acceptable" "acceptable" "acceptable" "cheap"
> y
 acceptable cheap      expensive  expensive  expensive
 expensive  acceptable acceptable acceptable cheap
> z
 acceptable cheap      expensive  expensive  expensive
 expensive  acceptable acceptable acceptable cheap
> x==y
> x==z
> y==z
Note that if S-Plus compares objects of different classes, it converts the
objects to a common class before carrying out the comparison. Write down
your answers before looking at the solution.
Exercise 12.3
Write a class-specific function summary for objects of class character. It
should behave identically to the function summary when applied to a factor.
For illustration, compare the output of
> summary(barley)
and
> summary(barley2)
where barley2 is defined by
> barley2 <- barley
> barley2$site <- as.character(barley2$site)
Exercise 12.4
Start a batch job that does the following. Generate 1000 standard normal
random numbers. Store the random numbers in a variable. Calculate the
mean and variance of the sample and print it out. Create a plot of the
data and send it to your printer. Now, start up S-Plus and interactively
calculate the mean and variance of the random sample you generated by
the batch job. Verify that you obtain the same results.
Exercise 12.5
Write a function that creates a 2 × 2 layout for graphs, doubles the character size of text in the graph, creates 4 histograms of newly generated
standard normal random numbers, and adds titles to the histograms. The
number of samples to generate should be a parameter to the function,
which is set to 100 by default. Be sure that after the function exits, all graphics parameter settings are the same as they were before you started the
function.
Verify this by creating another graph after the function is executed and
add a title to it.
12.6 Solutions
Solution to Exercise 12.1
We are going to display the yield in the Barley data set as histograms,
one per site. The sites are ordered according to their respective maximum
yield.
We generate a new site vector with the same data but factor levels
in a different order, according to the site maximum yields. The function
reorder.levels serves exactly that purpose (look up the help page!).
First, we copy the data set barley to a new object.
> barley.extended <- barley
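The reordering and plotting steps could then look like the following sketch (check the reorder.levels help page for the exact argument names before relying on this):

> barley.extended$site <- reorder.levels(barley.extended$site,
      tapply(barley.extended$yield, barley.extended$site, max))
> histogram(~ yield | site, data=barley.extended)

The tapply call computes the maximum yield per site, and the factor levels are rearranged accordingly, so the panels appear in that order.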
Figure 12.3. Histograms of Barley yield by site. Sites (factor levels) sorted in
ascending order of maximum yield.
site
Grand Rapids:20
Duluth:20
University Farm:20
Morris:20
Crookston:20
Waseca:20
> summary(barley2)
yield
variety
Min.:14.43333
Svansota,No. 462:24
1st Qu.:26.87500
Manchuria:12
Median:32.86667
No. 475:12
Mean:34.42056
Velvet,Peatland:24
3rd Qu.:41.40000
Glabron:12
Max.:65.76670
No. 457:12
Wisconsin No. 38,Trebi:24
year
1932:60
1931:60
site
Length:120
Class :character
Mode :character
Converting an object of class character into class factor would give the
desired result:
> summary(as.factor(barley2$site))
Crookston Duluth Grand Rapids Morris
20
20
20
20
University Farm Waseca
20
20
We therefore set a new method that simply converts a character object into
a factor object and calls the corresponding summary function.
> setMethod("summary", signature(object="character"),
function(object, ...) summary(as.factor(object)))
Look up the chapter on object-oriented programming (Chapter 10, pp. 323
ff.) to refresh how to set class-specific methods. Testing the revised routine
to handle summaries of character objects gives the desired result.
> summary(barley2)
yield
Min.:14.43333
1st Qu.:26.87500
Median:32.86667
Mean:34.42056
3rd Qu.:41.40000
Max.:65.76670
year
1932:60
1931:60
variety
Svansota,No. 462:24
Manchuria:12
No. 475:12
Velvet,Peatland:24
Glabron:12
No. 457:12
Wisconsin No. 38,Trebi:24
site
Crookston:20
Duluth:20
Grand Rapids:20
Morris:20
University Farm:20
Waseca:20
13
S-Plus Internals
This chapter will go into detail about how S-Plus works and how you can
take advantage of it. We will look at how S-Plus starts up, how it finds
and stores data, how the environment can be customized, and how data
are passed between functions.
We start with a description of where everything is stored and kept, and
how it is customized. As the setups for the UNIX and the Windows versions
differ significantly, we treat them separately.
13.1.1
The working chapter is the chapter that is used for storage of all objects
created during a session. If the start-up directory is a valid chapter, it
becomes the working chapter. The working chapter is the first element
in the search path. All queries for objects will be performed here first,
and all assignments are made to this chapter (unless explicitly requested
otherwise).
If the current directory is not a valid chapter, S-Plus checks if an environment variable S_WORK is set. If the directory specified in the environment
variable is found, S-Plus checks if it is a chapter. If so, S-Plus uses the
directory as its working chapter. If it is not a chapter (if no directory .Data
is found), it uses the named directory to store objects.
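How to set S_WORK depends on your shell; for example (C shell and Bourne shell, respectively; the path is only an example):

setenv S_WORK /users/andreas/swork
S_WORK=/users/andreas/swork; export S_WORK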
Finally, S-Plus checks if the log-in directory (the home directory) is an
S-Plus chapter. If you have a .Data from an old S-Plus version in your
home directory, rename it before you start up a newer S-Plus and convert
it using ConvertOldLibrary.
If all of these checks fail, S-Plus creates a new chapter in your home directory.
You will see a message like
Creating data directory for chapter
/users/andreas/MySwork
on start-up. The temporary directory is not removed after quitting S-Plus.
Note You should keep a clear structure from the beginning by keeping
projects separate. If S-Plus has created a temporary chapter, quit S-Plus,
remove the temporary chapter, and create a project directory as described.
This mechanism differs somewhat from the behavior described in
Chambers (1998).
13.1.2
You can configure S-Plus to do specific things on start-up and exit. There
are many possible applications. You can attach libraries, open a graphics
device automatically, change the prompt, or remind yourself of something
by printing a message.
Some ways of customizing S-Plus on start-up are given in the following.
.S.chapters
You can create a file named .S.chapters in your home directory. The file
can contain names of libraries, one per line, that should be loaded on start-up. These libraries will be attached in any of your sessions, no matter where
a session is started.
.S.init
The directory from which you start S-Plus can contain a file .S.init. The
contents will be interpreted as S-Plus commands that are executed on
start-up.
Try it out by creating a file .S.init that opens a graphics window when
S-Plus is started.
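For example, a .S.init file containing the following two lines opens a Motif graphics window and changes the prompt on start-up (a minimal sketch; the prompt string is our own choice):

motif()
options(prompt="S> ")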
.First.local, .Last.local
If a function .First.local exists anywhere in the S-Plus search path,
it is executed prior to .First. The intended usage of .First.local and
.Last.local is to perform specific tasks system-wide (i.e., for all users of
the site).
S_FIRST
If an environment variable S_FIRST exists, its contents are interpreted as
S-Plus commands. Setting environment variables depends on the shell
you use.
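For example, in the C shell you could set (quoting the expression so the shell passes it on unchanged; the command itself is only an illustration):

setenv S_FIRST 'options(prompt="S> ")'

In the Bourne shell, assign the same string to S_FIRST and export it.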
.First
After loading S-Plus, the system searches for a .First function. If it is
found, S-Plus executes the function.
.Last
When the session is ended, S-Plus searches for a .Last function to execute,
just as with the .First function at start-up time.
.First.lib and .Last.lib
If you install your own libraries and want to have an equivalent of .First
and .Last on opening the library, look at the functions .First.lib and
.Last.lib.
Note Storing start-up customization in .S.init is preferable to having
a .First function or a global .S.chapters setting. If there is a problem
with any of the start-up commands, it is easy to rename .S.init to avoid
execution, and it can easily be copied to other chapters.
Of course, .S.init can contain definitions of .First and .Last.
Note that the behavior described is partially different from the
description in Chambers (1998).
13.2.1
environment. If those directories are not found, S-Plus will prompt you on
start-up to ask if you want them to be created. You can copy the preferences directory from another project if you prefer to keep them. You can
also point S-Plus to use the preferences from another directory. Use the
S_PREFS command line option in this case, for example by calling
SPLUS.EXE S_PROJ="c:\thesis\draft1" S_PREFS="c:\thesis\.Prefs"
This enables use of the same preferences for several different projects.
S-Plus can be instructed to execute certain commands right on start-up. There are several ways of doing it. Let us open a graph sheet and greet
the world on start-up. The command line option solution would be
SPLUS.EXE S_FIRST="graphsheet(); cat(\"Hello world.\n\")"
The commands to execute could be in a file. In this case, specify the file
from which to retrieve the commands:
SPLUS.EXE S_FIRST=@startupfilename
13.2.2
.Last
.Last is the counterpart of .First for exiting. When S-Plus is quit, you
can place some cleanup work in .Last, or just wish yourself good-bye.
.First.local and .Last.local
If you are a system administrator and want to install functions that work
as .First and .Last for every user on the system, have a look at the
description of .First.local and .Last.local.
.First.lib and .Last.lib
If you install your own libraries and want to have an equivalent of .First
and .Last on opening the library, look at the functions .First.lib and
.Last.lib.
13.2.3
The S-Plus for Windows script window has some interesting details.
Understanding them can help in practical circumstances.
Basically, if you select some text in the script window and click the run
button, the marked text is executed. If no text is selected, everything in the
script window is executed. The execution takes place in a way that is very
similar to using the function source. You can also imagine this as if you
entered everything on the command line with braces ({ and }) around it.
If an error occurs anywhere in the execution of the code, you are almost
always put back into the state that the system was in before executing the
code. All objects generated along the way and all calculations are lost. On
the positive side, the system is not left in an uncontrolled state by just
stopping the execution of the code somewhere along the line.
This behavior implies that all objects created are kept in memory until
the end of the code block is reached. In particular if you work with large
data sets or generate a large amount of output and objects, it might be
considerably faster if you send the commands to S-Plus via the command
line. You could copy and paste the commands directly, or you could use the
interface from the operating system and enter (at the Windows or UNIX
command prompt)
Splus < spluscode.ssc > output.txt
Note Note the difference between using the play button for the script
window and entering the commands on the command line: using the script
window, if an error occurs, your system will be set back to the state it was
in before the start of the execution. If you copy and paste a whole block of
commands to the command line and an error occurs, the execution of the
What will happen if you enter g at the prompt now? Is the value of store.x
known inside the function g?
The variable store.x inside the function f, as well as the variable x
inside g, which is passed to g from the calling function, are so-called local
variables. They (or their values) are not known outside the functions in
which they are declared. Such an environment, a closed function together
with all values known within its scope, is called a frame.
If you want to access an object outside the current frame, use get
and specify the frame in which to search. For example, a local function
g within a function f can access objects in the frame of f by specifying frame=sys.nframe()-1; that is, the current frame minus 1 (one level
above). Consider though whether that object should not be passed as an
argument. Here is an example:
f <- function(x=33)
{
g <- function()
{
z <- get("x", frame=sys.nframe()-1)
print(z)
}
g()
}
The better solution would be:
f <- function(x=33)
{
g <- function(z)
{ print(z) }
g(z=x)
}
Note If an expression is not completed successfully, nothing at all is stored.
S-Plus restores the state of the system exactly to the state it had before
the expression was entered.
This is especially important if you intend to start a long simulation. If
your function contains an error that shows up after some minutes or even
hours of run time, no results are stored. So, run a shorter version of such a
simulation to test it out before you actually run the longer one.
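For example, make the number of replications a parameter of the simulation function, so that a short test run is cheap (a sketch; the body merely stands in for a real simulation):

sim <- function(n.rep=1000)
{
    results <- numeric(n.rep)
    for(i in 1:n.rep)
        results[i] <- mean(rnorm(100))
    results
}
> test.run <- sim(n.rep=10)   # quick test before the full run
> full.run <- sim()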
Note Good programming style avoids the use of global variables. For example, if your function needs a variable that is declared in the interactive
environment, it will no longer work if the variable is assigned a new value
or if you give this function to somebody else (without the external variable
the function needs).
13.5 Exercises
Exercise 13.1
Create the directory Scourse and a corresponding data directory for
S-Plus. Start S-Plus for the rest of this course from the new directory. Check if it works by starting S-Plus and determining what objects
are already defined in this data directory.
Exercise 13.2
Write a function for calculating the coefficient of variation. It is defined as
the standard deviation divided by the mean of the data.
The function shall take any number of arguments such that it can be
called as
> coeff.var(x)
or
> coeff.var(x, y, z)
to calculate the coefficients of variation for x, y, and z individually. In
analogy to the function mean, it should have an optional parameter na.rm
(by default, F) such that missing values (NA) are removed prior to any
calculation.
Check if all data are numeric. Write a local function (inside coeff.var)
to calculate the coefficient of variation for a single vector.
13.6 Solutions
Solution to Exercise 13.1
If you work on a UNIX system, create a new project directory, change the
working directory to it, and initialize it as a chapter:
mkdir Scourse
cd Scourse
Splus CHAPTER
Creating data directory for chapter
S chapter initialized.
Then, start S-Plus from here. The next time you log on and want to
continue working on this project, switch to the directory (using cd) before
starting up S-Plus.
Under Windows, create the new directory. There are many ways of doing
it, for example by right-clicking in a window and selecting New and
Folder. Then, proceed as described in Section 2.8, page 23.
Start S-Plus to verify that the setup works.
14
Information Sources on and Around
S-Plus
14.1 Insightful
A good start for information about a software product is the Internet home
page of the developer and vendor, for S-Plus
http://www.insightful.com/
Insightful maintains various very useful information pages, among them
a list of frequently asked questions (FAQs) that is searchable, a list of books
on S-Plus, white papers, and more. Insightful has also started to run Web-based seminars around S-Plus. The presentation can be followed by just
using a Web browser. Previous Web casts are archived and can be replayed
from the Web page.
You might also be interested in other products around S-Plus that are
offered, for example the Insightful Miner for data mining, the S-Plus Array
Analyzer for the analysis of microarray data, or the environmental statistics
package.
archives, functions, etc. The preferred way to use StatLib is via the Web.
The StatLib home page can be reached at
http://lib.stat.cmu.edu/
Take a look yourself and you might find many interesting things on these
pages.
15
R
In the 1990s, two authors from New Zealand, R. Gentleman and R. Ihaka,
wrote a program they called R. The name already suggests that R is somewhat related to S and S-Plus. The development is described
in Ihaka and Gentleman (1996).
Since 1997, a core team of developers has continuously improved the system.
The system R has a syntax like S. It is available under the terms of the
GNU GPL (General Public License), which is provided with the software
or can be found at http://www.gnu.org/.
You can download the system from the Web free of charge (it comes with
some Linux distributions, too), including the full source code or in binary
form for popular systems such as Windows, Linux, or Macintosh (see the
R home page, http://www.r-project.org/).
As you can easily download and install the system and take a closer look,
we will focus here on some of the differences between R (Version 2.0.1) and
S-Plus (Version 7.0, beta release).
Over the years, R has become a fully fledged environment that is widely
used. If you find yourself or your institution adopting the system, you
should note that you are using a system that exceeds many commercial
applications in many respects. You might want to consider supporting the
R Foundation as a donor or as a member (from as little as EUR 25 per
year for natural persons and EUR 250 per year for institutions). Details
are available from the R home page.
15.1 Development
R is developed by a core team and many more authors around the world
who contribute libraries (packages in R). Many of the popular S-Plus
functions are available for R, too (Therneau's survival library, Venables
and Ripley's MASS library, Pinheiro and Bates' nonlinear mixed-effects
library, and many more).
The development, announcements of new releases and packages, and
discussions take place via electronic repositories and forums. There is a
mailing list similar to S-News for general discussions and separate lists
for announcements and developers' discussions. All archives are available
online.
R places much of the functionality that is readily available in S-Plus
into libraries that are automatically installed or that need to be downloaded from CRAN and made accessible by entering, for example,
library(lattice) for the lattice package. The CRAN (Comprehensive
R Archive Network) content has grown at an impressive pace and contains a large collection of typically good-quality packages. Some of the
larger packages have by now grown into separate projects, the most
prominent example being Bioconductor (http://www.bioconductor.org/),
a bioinformatics collection of tools for the analysis of genomic data.
CRAN and Bioconductor are both linked from the R project home page,
http://www.r-project.org/.
15.3.1
Language
Make your own decision on what you prefer. If you want to keep your
code somewhat compatible for both systems, follow the S-Plus convention
and declare each object that needs to be accessed explicitly.
Note Well, yes, you can mimic the R behavior in S-Plus if you insist.
Here is one way of doing it:
f <- function(x)
{
g <- function(x=get("x", frame=sys.nframe()-1))
return(x^2)
return(g())
}
The trick here is to define x in g by default to get the object of name
x from the frame above (the body of the function f). This is achieved by
setting the evaluation frame for get to the current frame minus 1 by setting
frame=sys.nframe()-1.
15.3.2
Libraries
15.3.3
Trellis-Type Graphs
The Trellis graphs are largely implemented in the package lattice based
upon the new grid graphics library. One can use the same formula-based
interface, too. You will find that the layout differs slightly, character sizes
are a little bit different between R and S-Plus, there is no bar in the panel
strip by default, colors differ, and more. The functionality is basically the
same, though. In R you need to enter library(lattice) before the functions
become available.
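For example, the by-site histogram of Exercise 12.1 can be produced in R with the same formula interface once the package is loaded (the barley data set ships with the lattice package):

> library(lattice)
> histogram(~ yield | site, data=barley)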
15.3.4
15.3.5
15.3.6
Memory Handling
15.3.7
R offers the option to include mathematical formulas, fractions, Greek characters, etc. in graphs in a TeX-like notation. A review of this functionality
is given in Murrell and Ihaka (2000). See the help page for plotmath in R.
15.3.8
S-Plus comes with a graphical user interface for Windows and UNIX. For
R, there are several related projects. Work is currently ongoing to develop
a GUI based on TCL/TK, Java, GTK, and GNOME.
15.3.9
Start-Up Mechanism
15.3.10
Windows Integration
S-Plus for Windows has some links to other Windows applications: it has
links to Excel (import, export, data browser) and PowerPoint (automatic
slide show generation) and writes WMF (Windows MetaFile) graphics files
(the latter are also available in UNIX versions).
R provides an interface to Windows applications to link to them via a
COM interface.
If you create many PowerPoint slide shows, the S-Plus PowerPoint wizard is very convenient: it creates a slide show of all your graphs on the
screen with a few mouse clicks. If you use R, you will possibly end up
exporting graphs to files and importing them one by one into PowerPoint.
15.3.11
Support
S-Plus support can be obtained from the vendor and via the discussion list
S-News (described in Section 14.2 on page 414). Both sources are typically
very responsive. Via a maintenance agreement, you get the latest S-Plus
updates automatically.
R users rely on the discussion list, which is almost always very responsive
and helpful. Updates can be downloaded using Web browsers or via a call
to functions like update.packages within R.
More details about differences between the systems can be found in the
R FAQ pages.
http://cran.r-project.org/doc/FAQ/R-FAQ.html
the general FAQ list for R (maintained by Kurt Hornik)
http://www.stats.ox.ac.uk/pub/R/rw-FAQ.html
Windows-specic FAQs (maintained by B.D. Ripley)
http://cran.r-project.org/bin/macosx/RMacOSX-FAQ.html
Mac OS X-specic FAQs (maintained by Stefano M. Iacus)
15.4 Summary
It should have become clear that the systems, both implementing the language S, are very similar. In general, knowing one system will easily get you
started in the other. For most end users, differences are minor if visible at
all. There are some differences in various places, though, and maintaining a
library in both systems can therefore generate extra effort in the long run.
For the purpose of this book we focus on S-Plus, partially to avoid a
number of sections that would otherwise read "if you use S-Plus then"
and "if you use R then". However, with few exceptions the book can be
used equally well as an introduction to the R system.
16
Bibliography
The print references are given first. Complementary Web pages are provided
with the reference. References that exist only online are given in a separate
section.
On-line references can become outdated very fast. We had to update
about half the references for the current edition, and by the time of this
printing, a few references might already have changed. If you can't find
information under the given address, you might try to find the new address or further information by using the popular search tools Yahoo!
(http://www.yahoo.com/) or Google (http://www.google.com/). If you are
looking for the home page of a book or a document with a specific title,
first try to enclose the title in double quotes. The search engine will then
search for pages containing exactly this string (instead of just the individual
words).
Lamport, L., 1985. LaTeX: A Document Preparation System. Addison-Wesley, Reading, MA.
Millard, S., and Krause, A. (eds.), 2001. Applied Statistics in the Pharmaceutical Industry (with Case Studies Using S-Plus). Springer-Verlag, New
York.
Web page: http://www.elmo.ch/doc/statistics-in-pharma/
Murrell, P., and Ihaka, R., 2000. An approach to providing mathematical
annotations in plots. Journal of Computational and Graphical Statistics,
9(3): 582-99.
Pinheiro, J.C., and Bates, D.M., 2000. Mixed-Effects Models in S and S-Plus. Springer-Verlag, New York.
Web page: http://cm.bell-labs.com/cm/ms/departments/sia/project/nlme/
MEMSS/
Therneau, T.M., and Grambsch, P., 2000. Modeling Survival Data: Extending the Cox Model. Springer-Verlag, New York.
Web page: http://mayoresearch.mayo.edu/mayo/research/biostat/therneaubook.cfm
Tukey, J.W., 1977. Exploratory Data Analysis. Addison-Wesley, Reading,
MA.
Venables, W.N., and Ripley, B.D., 2002. Modern Applied Statistics with
S-Plus, 4th ed. Springer-Verlag, New York.
Web page: http://www.stats.ox.ac.uk/pub/MASS4/
Venables, W.N., and Ripley, B.D., 2000. S Programming. Springer-Verlag,
New York.
Web page: http://www.stats.ox.ac.uk/pub/MASS4/Sprog/
StatLib
The StatLib archive containing a large S-Plus archive and much more.
http://lib.stat.cmu.edu/
Trellis home page
Trellis displays in all details.
http://cm.bell-labs.com/cm/ms/departments/sia/project/trellis/
16.2.2
TeX-Related Sources
CTAN overview
CTAN, the Comprehensive TeX Archive Network.
http://ctan.tug.org/
CTAN archive
One of the CTAN archives in the UK.
http://www.tex.ac.uk/
German users group
WWW and FTP server of the German TeX Users Group, in German.
http://www.dante.de/, ftp.dante.de
16.2.3
Other Sources
Adobe Acrobat
Acrobat Reader for PDF files.
http://www.adobe.com/acrobat/
GhostScript and GhostView
GNU archive containing the GhostScript PostScript tool.
ftp://mirror.cs.wisc.edu/pub/mirrors/ghost/
Index
Symbols
!=, 86
*, 76
**, 76
+, 7, 76
-, 76, 110
..., 186, 300, 411
.C, 372, 374
.Data, 401
.First, 403, 405
.First.lib, 403, 406
.First.local, 403, 406
.Fortran, 372, 374
.Last, 403, 406
.Last.lib, 403, 406
.Last.local, 403, 406
.Last.value, 407
.S.chapters, 402
.S.init, 403
.guiFirst, 405
/, 76
:, 81
<, 86, 392, 395
<=, 86
432
Index
A
abline, 134, 135, 254, 255, 257
abs, 93
Acrobat reader, 429
add1, 259, 260, 274
again, 365
aggregate, 200
all, 306, 411
and (&), 87
annotate palette, 15
ANOVA, 255, 261–263, 269–270,
278–284
coefficients, 262
contrasts, 262
output, 262
split-plot, 269
anova, 268, 275, 276
any, 306
aov, 255, 256, 262, 263, 269, 279
apply, 103105, 111, 116, 175, 200,
306
args, 165
arithmetic operators, 76–77
array, 102
arrays, 95, 101104
syntax, 102
arrows, 134, 212
as.data.frame, 177
as.integer, 381
as.numeric, 355
ask, 360, 361
assign, 298, 377
assignment, 77–79
= symbol, 77
_ symbol, 77
<- symbol, 77
-> symbol, 77
attach, 106, 107, 111, 210, 377
axes, 134136
axis, 134136
B
backslash in strings, 315
backspace in strings, 315
Index
433
434
Index
densityplot, 157
deparse, 302
detach, 106
dev.ask, 129
dev.copy, 129, 160
dev.cur, 129
dev.list, 129, 159, 160
dev.next, 129
dev.off, 129, 376
dev.prev, 129
dev.print, 129
dev.set, 129, 160
device, see graphics devices
dice test, 235, 244
dim, 97, 111, 112
dimnames, 97, 98, 241, 368, 419
distribution graphing, 223
distribution-related functions,
220–224
DLL, 372
do.call, 124
doErrorAction, 338
dotchart, 220
dotplot, 157
drop1, 260
drug.mult data set, 269
dummy.coef, 262, 263
dump, 356
dumpMethod, 328
duplicated, 118, 122
dynamic data analysis, 219
E
Eclipse, 368
ed, 355
edit, 355
editing data, 355
Emacs, 367
emacs, 355
Encapsulated PostScript, 388
equal to (==), 86, 90
Error, 269, 281
ESS (Emacs Speaks Statistics),
367
Index
435
editing, 30
figure region, 138
Greek letters, 390
GUI, 13–15
in text processors, 386–390
layout, 138–143
Trellis, see Trellis
layout customization, 142
leave out a figure, see frame
line type, 132
line width, 132
margins, 138, 139
multiple figure layouts, 140
omitting axes, 134
orientation, 128
plot character, 132
plot region, 138
size, 139
subtitle, 132
surrounding box, 132
title, 132
type, 132
graph sheet, see also graphsheet,
12, 127
renaming pages, 369371
graphical user interface, see GUI
graphics, 125–152
devices, 126
active, 126
closing, 128, 160
closing all, 128, 160
multiple, 128
options, 128
options, see par
plot options, 131
plotting data, 128–131
settings
storing and restoring, 364
type option, 130
window, 127, 160
graphics.off, 129
graphs, see graph
graphsheet, see also graph sheet
graphsheet, 125, 126, 160, 319,
369, 376, 387
436
Index
iris4d, 127
is.factor, 261
is.finite, 234
is.inf, 234
is.na, 110, 116, 232, 234, 244
is.numeric, 411
iteration, 303308
J
java.graph, 126
K
Kaplan–Meier, 265
key, 175177
key, 175, 176, 181, 184, 185, 187,
224
kruskal.test, 228
ks.gof, 228
ksmooth, 257
Kyphosis data set, 263, 268
L
labels, 261
landscape mode, see graph
orientation
lapply, 122, 200, 290–291, 345,
346
last expression's output, see
.Last.value
LaTeX, 387, 389–390
lazy evaluation, 299
legend
in Trellis, see key
length, 89, 92, 93, 368, 369
less than (<), 86
less than or equal to (<=), 86
letters, 190
Leukemia data set, 265
levelplot, 157, 216
levels
see factor, 381
levels, 181, 379
Index
437
M
Mac(intosh), see R, Mac
macros, 2
make.groups, 174, 175, 181
manova, 255, 256, 269
mantelhaen.test, 228
map, 378
match, 185, 186, 342
mathematical operations, 82–84
MathSoft, 2
matplot, 220
matrices, 95101
addition, 99
assigning elements, 99
defining dimensions, 96
inverse of, 100
merging, 101, 112
multiplication, 100
naming rows and columns, 97
overwriting elements, 99
referencing entries, 84, 98
singular value decomposition,
100
subtraction, 99
syntax, 96
matrix, 96, 97, 354
max, 93, 104, 109, 110, 116, 117,
242
mcnemar.test, 228
mean, 89, 90, 110, 200, 232, 233,
293
median, 200, 232
merge, 110, 111
merging data, 110111
merging matrices, see matrices,
merging
mfrow, 254, 260
min, 242
missing, 301
missing values, see NA
model building, 253, 258261
model diagnostics, 253255, 260
model syntax, 256257
motif, 126, 127
ms, 255
mtext, 240
multiple instances of S-Plus
simultaneously, 23, 375
N
NA, 109–110, 116, 134, 184,
231–234, 240
na.exclude, 233
na.fail, 233
na.rm, 116, 117
names, 108, 241, 288, 368
NaN, 374
negative binomial distribution, 222
new, 325–327
newline character in strings, 315
next, 305
nlme3 library, 177, 377, 420
nls, 255
nls, ms, 256
no room for x axis (error message),
141
normal distribution, 222
multivariate, 222
not equal to (!=), 86
notational conventions, 6
NULL, 287, 288
O
Object Explorer, 11, 18–19
object classifications, 18
toolbar, 18
objects, 79, 343, 345
objects summary window, 49
Old S, 2
on.exit, 364, 400
openData, 384
openlook, 127
operating systems supported by
S-Plus, 4–6
options
echo, 310
error, 314
warn, 310
options setting, see par
order, 113, 117, 122
order of operations, 76, 77
ordered, 261, 269, 396
ordered factor, see factor, ordered
outer, 318, 319
output device, see graphics device
overparameterization,
see regression,
overparameterization
overwriting S-Plus objects, 78
P
pairs, 220
panel function, 187
panel.barchart, 189
panel.lmline, 186
panel.loess, 186
panel.superpose, 173
panel.xyplot, 186
par, 138, 139, 254
parallel, 157
parallel execution of batch jobs,
375
parameters
adding self-dened, 139
parentheses, see brackets, round
parse, 343, 361
paste, 316, 317, 361, 382
PDF Files, see Acrobat reader
permanency of S-Plus objects, 78
persp, 214, 220, 318, 320
pi
estimation of, 236, 246
pi, 93
picture, see graph
pie, 219
piechart, 157
plot, 129, 134, 219, 224, 233, 254,
257, 258, 260, 324
plot.design, 270, 283
plot.gam, 268, 277
plot.glm, 277
R, 4, 417–423
colors, 421
data import and export, 421
discussion group (r-help), 422
FAQ, 422
formulae in graphs, 421
Foundation, 417
graphical user interface, 421
lattice graphics, 420
Mac, 422
memory handling, 421
startup, 422
supporting, 417
Trellis, see R, lattice graphics
updates, 422
Windows, 422
R-squared, 253
random number
generation, 220
generator initialization, see
set.seed
rapply, 292–293
rbind, 101, 112
read.table, 349, 352, 353, 384
reading data, 349, 354–355
readline, 354
record, 349, 357
recover, 314, 315, 333
rectangle
plotting a, 145, 148
referencing entries, 84
regression
adding terms, 258
coding of variables, 273
confidence intervals, 260
example, 251–253
factor variables, 260–261
intercept, 253, 272
logistic, see logistic regression
model output, 253, 258
overparameterization, 272
polynomial, 260, 262
predicted values, 260
simple linear, 255, 258–261
standard errors, 260
start-up
customization, 402, 403
directory (Windows), 404
starting S-Plus, 74–75
statistical models, 255
Statistical Sciences, Inc., 2
StatLib Server, 414–415
StatSci, 2
stdev, 197, 233
stem, 195, 196, 200
stem-and-leaf display, see stem
step, 260, 268, 276
stepwise regression
backward elimination, 260
forward selection, 259
stop, 302, 310
storage mechanism, 407
strings
concatenate, 316
strip.default, 165
stripplot, 157
Student t distribution, 222
Student t-test, 225, 227
subscripts option for Trellis, see
Trellis, subscripts option
subsetting elements, 87
substitute, 302
sum, 86, 92, 93
summary, 195, 197, 199, 200, 203,
210, 242, 252, 253, 258, 262,
264, 266, 272, 384, 385, 397,
398
summary statistics (GUI), 27
Surv, 266, 267
survdiff, 267
survfit, 256, 266
survival analysis, 256, 265–267
composite outcome, 266
Cox proportional hazards model,
267
martingales, 265
median survival time, 267
output, 266
statistical test, 267
svd, 100
exercises, 181–193
function arguments,
see trellis.args and
trellis.3d.args
functions, 157
graph ordering, 163
horizontal y-axis labels, 164
key, see key
layout, 154, 162
legend, see key
options, 160–165
order of panels, 164
panel function, 165, 168–171,
189
panel order, 164
settings, 161
generally useful, 172
strip argument, 165
subscripts option, 177–180
syntax, 156–158
useful settings, 172
using the GUI, 36–38
trellis.3d.args, 157, 161
trellis.args, 157, 161
trellis.device, 159, 160
trellis.par.get, 161
trellis.par.set, 161
trigonometric functions, 145, 146
trunc, 91, 94, 244
truncation methods, 91, 94
type option, see graphics
U
unclass, 242
uniform distribution, 222
UNIX, 4, 128, 350
devices, 127
starting S-Plus, 74
UNIX GUI, 47–56
unlist, 294
unpaste, 317
untrace, 313, 333
update, 167–168, 176, 258, 259,
268, 274
V
var, 197, 200, 232, 233, 269
var.test, 228
variable
declaration, 296
variance, see var
vector
arithmetic operations, 82
concatenation, 89
creation, 79
efficiency of, 83
referencing entries, 84
vi, 355
vt100, 127
W
warning, 302, 310
Weibull distribution, 222
which.inf, 234
which.na, 234
while, 304
White Book, 2
wilcox.test, 228
Wilcoxon rank sum distribution,
222
Windows, 4, 74, 75
devices, 127
start-up directory, 404
wireframe, 157, 215
wmf.graph, 387
workbench, 4, 368
working directory
creating a, 23–24
write.table, 353
writing
data, 349
S-Plus output, 356
X
xyplot, 157, 181, 186, 211, 223,
224