www.stockcentral.com
Electronic Data:
Different Sources, Different Numbers, All Correct
by Ellis Traub
February 2008
Published by:
ICLUBcentral Inc.
1430 Massachusetts Avenue
Cambridge, MA 02138
617-491-3300
www.iclub.com
StockCentral.com is an online community that provides high-quality data, insightful discussion, and intuitive software for the people who need it most -- individual investors. Whether you’re managing your own portfolio, running an investment club, or helping to educate your fellow investors, StockCentral gives you the exclusive web-based tools and support you need:

Community
Gain insight through peer discussion and workshops led by industry experts in our message board forums.

Data
Examine historical data and detailed financial reports using the StockCentral Data Service, provided by Morningstar Inc. (used by Yahoo! Finance and Forbes).

Screening & Analysis
Identify winners for your portfolio with the StockCentral Screener and then generate an Instant Stock Analysis with Take Stock Online.

Plus, you’ll have instant access to other smart features like an investing Event Calendar, Doug Gerlach’s Blog, an Investing Book Store, Member Discounts, and much more!

Start your free full-featured trial today at
www.stockcentral.com

About the Author

Ellis Traub, renowned financial author and public speaker, is one of the most respected names in the world of fundamental investing. After embracing the methodology behind qualitative growth analysis, he was able to turn his financial life around and has since devoted himself to helping educate other investors.

Ellis is also the author of the popular book Take Stock and its companion software, as well as the original developer of the Investor’s Toolkit line of legacy stock analysis desktop software programs. His company, Inve$tware Corp., merged with ICLUBcentral in 2003. ICLUBcentral’s online investing community at StockCentral.com features an online version of Ellis’ original Take Stock “Instant Stock Analysis” software.

Now retired, Ellis lives with his wife in Davie, Florida and maintains an active speaking schedule and online education presence.
Electronic Data:
Different Sources, Different Numbers, All Correct
Ellis Traub
Today, investors using ICLUBcentral’s Investor’s Toolkit, Stock Analysis Plus, or Classic
software to analyze and select their stocks have their choice of three sources of electronic
data with which to work: NAIC’s OPS/SDS data, provided by S&P Compustat;
StockCentral’s data, provided by Hemscott; and AAII’s Stock Investor Pro data, provided
by Reuters.
These data sources also provide the data for ICLUBcentral’s desktop version of Take
$tock (Compustat) and StockCentral’s online version of Take $tock (Hemscott), neither
of which products offers a choice of data sources. Their data is also used in the
production of both of StockCentral’s services that provide investors with a list of
stocks suitable for consideration: the Complete Roster of Quality Companies
(Compustat) and the online Screener (Hemscott).
The first such data was provided by NAIC Computer Group director Bill Thomas who,
with his wife Rosemary, would manually type the data for 1,000 stocks from the Value
Line pages into data files that could be used in the software. There were relatively few
problems at that time because NAIC investors were accustomed to using Value Line and
appreciated not having to enter it into the software manually. There was nothing else to
compare it with, and the number of software products that could make use of it was
limited. Those files may have contained around 100 data items and were expected to
provide only the data required to fill out a Stock Selection Guide (SSG).
However, as time has passed, the number of software products using the data has grown,
as has the variety of information they use. And electronic data has become more
prevalent and competitive. This increase in sources and products has caused
considerable confusion, because the user can now compare the results using one source of
data with those of another. And the results are often different, in some cases radically
so.
How can this be? If the data reported by companies is as strictly regulated as it seems to
be, and if the SEC is as demanding as we would hope them to be, shouldn’t the numbers
all be the same? And, therefore, shouldn’t the results be as well?
The answer is that the “strict” regulation by the Financial Accounting Standards Board
(FASB), the body that sets the standards for accountants and analysts with respect to
reporting financial results, provides considerable latitude in the manner in which data is
reported, and there is substantial room for interpretation within the boundaries it sets.
Compounding that opportunity for differences is the fact that each data provider has a
slightly different profile for its typical client; and these clients run the gamut from the
accounting profession (which is concerned with historical accuracy) to the financial
analysts (whose concern is forecasting the future). Where the provider who serves
primarily the former will diligently include every financial event in the final report, those
serving the latter will work hard to exclude from the mainstream reporting those values
that are not likely to recur on a regular basis in the future. The former type of client wants
to see what actually happened, while the latter is more interested in pro forma figures.
There are also differences of opinion, from provider to provider, as to what data should
and should not be considered likely to recur. And, there are also differences in which line
items from the income statement or balance sheet should comprise revenues or which
should be mapped to extraordinary or non-recurring items—each requiring different
handling according to FASB rules.
As if that isn’t enough, the data providers do not produce the files that are used by the
ultimate user. Those entities, BI, StockCentral, and AAII, have the responsibility for
selecting the required data from the huge databases to which the providers give them
access. And it is they who ultimately decide which of the data should appear in the SSG
files, or the limited databases from which screening is performed.
While there is room for error at each of these levels, most of the data we get is correct,
even when it differs from its competitor’s data. However, as you will learn, it isn’t all
correct; and not all of it is even reliable. It is therefore prudent to be alert for raw
data, and for results derived from that data, that stand out and appear questionable. We
strongly recommend that due diligence be exercised in all such cases and that the data be
challenged rather than blindly accepted at face value.
With the assistance of Joe Craig, Jim Thomas, and Dave Forgianni, we have made a
rather comprehensive study of the data from all three sources; and my purpose in writing
this report is to share with you the results of that study.
I must also thank the representatives of all three data providers: Don Barsody of
Compustat, Magdalena McMahon of Hemscott, and Greg Sedgwick of Reuters, all of
whom went out of their way to be helpful and forthcoming. Some of what we have
learned has been surprising; all of it enlightening.
The Companies
What first aroused our interest, and then provided the keen motivation for doing this
study, was the fact that the Complete Roster of Quality Companies (CRQC), using the
Compustat data, differed so radically from the result using the StockCentral Screener to
generate a list of companies that were either desirable or at least acceptable. What was so
vastly different was the Quality Index (QI), a metric used in both cases to score the
quality of a company on a scale from 1 to 10.
Since we were certain that the algorithms in each of the screening processes were the
same (one, in fact, was copied directly from the other), it could mean only that the data
that produced the QI differed sufficiently to alter the result.
Since the data items that were used to produce that index were the most significant items
in our data files, and since the calculations used to make up the components of the QI
would tend to magnify any differences in the actual data points, we determined that a
good sample of data to use for the study would be the data for those companies that were
included in either the CRQC or the StockCentral Screening product.
Typically, there would be upwards of 120 companies in either group. However,
combining the two lists, we found differences sufficient to make the list grow to more
than 160 companies. This meant that approximately 40 companies made each list that did
not appear on the other! Another way to view this was that fully a third of the companies
on one list did not make the cut for the other. They were too small, too young,
unavailable, or the data used in the calculation of the QI was sufficiently different to
produce a poor result in one instance and a good result in the other.
This was a serious concern as it would undermine any normal user’s confidence in these
products, in the validity of those lists, and ultimately in the quality of the data that was
used for those lists. And, it was difficult to overcome a bias, since the BI data had been
used for so long by so many—having replaced the Value Line data as the “gold standard”
for most users. Thus, the StockCentral data, as the “new kid on the block,” would likely
be viewed as the data most flawed.
It was necessary, therefore, to come up with a valid, unbiased, and completely objective
way to measure the quality of each data source. This is the result of that effort.
Basis for Comparison
There are three basic areas where the data may differ:
• Static Data – Data that is reported by the company with little or no
opportunity for changes to be made by the provider. This includes such
items as revenues, pre-tax profit, and balance sheet items. The differences
that can occur in the static data include:
- accuracy – typos, etc.
- quantity of data – the number of years and quarters reported
- whether or not the data is restated or reclassified
- “freshness” – how long before it reaches the user
- interpretation or definition of items – for example, whether
consolidated income statements should include, as their own, the
revenues of companies in which a majority of the stock is owned, or
just show the equity in those companies as an asset
- currency reported – some providers report some ADRs in the native
currency, while others consistently report in US Dollars
• Dynamic Data – Data whose inclusion is at the discretion of the
distributor, such as data from discontinued operations, extraordinary data,
number of shares applied for dilution, etc. Such choices, made by BI,
ICLUBcentral, or AAII, as to which line items should comprise the data
presented in the files, are applied consistently in each data product but
may differ from product to product and can often be found by examining
the expanded data of each.
• Subjective Data – Data about which the providers’ analysts’ opinions
may differ as to which line items are non-recurring (infrequent or
unusual), which are extraordinary (infrequent and unusual), and how that
data affects pre-tax profit, net available to the shareholders, and ultimately
earnings per share. These are usually applied consistently by each
electronic data provider but may differ between them and can also be
spotted by examining the expanded data of each.
The data files we use have a common layout. That is to say, the SSG file for each
company consists of 537 individual data items or fields, listed in the same order (See
Exhibit A). For example, the first ten fields of each file contain ten years of high prices
beginning with the oldest and ending with the most recent year. The second ten fields
contain ten years of the lowest prices in the same order.
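Because the layout is fixed-position, such a file lends itself to straightforward parsing. The sketch below is hypothetical: it assumes a comma-separated record and invents the function name and field groupings for the first twenty positions, since the actual SSG file format is not published in this report.

```python
# Hypothetical sketch of reading the fixed-position layout described above:
# positions 0-9 hold ten years of high prices (oldest first) and positions
# 10-19 hold the matching low prices. The comma-separated format and the
# group names are assumptions for illustration only.

def parse_ssg_fields(raw_line):
    """Split one company's record into named groups of fields."""
    fields = [float(x) if x else None for x in raw_line.split(",")]
    return {
        "high_prices": fields[0:10],   # oldest year ... most recent year
        "low_prices":  fields[10:20],  # same order as the highs
        # ... the remaining fields (537 in all) would follow in the same way
    }

record = parse_ssg_fields("12.5,14.0,15.2,18.8,21.0,24.5,28.1,30.0,33.7,36.2,"
                          "9.1,10.2,11.8,13.5,16.0,18.2,21.4,23.0,25.9,28.4")
print(record["high_prices"][-1])   # most recent year's high: 36.2
```

The point of the fixed layout is that every product reading the file can find, say, the most recent year's high price at the same position without any per-provider logic.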
The original files contained, perhaps, 100+ fields, since they were required to provide the
data for the SSG alone. Now, with more products doing more things such as portfolio
management and balance sheet analysis, the number and variety of data items that
comprise these files has grown to where it is today. So there is a great deal more
opportunity for data differences to occur.
We have concerned ourselves with only the data that is significant for arriving at a
conclusion about the quality of a company as defined by our common methodology, since
this is the most important determination we make. When George Nicholson bestowed
upon us his wonderful gift—this simple approach for analyzing companies—the criteria
he proposed evaluating were essentially the strength and stability of the growth of sales,
pre-tax profit, and earnings per share, and the trend in profit margins and return on
equity. This was based upon the important distinction he made between the need to study
just the results a company’s management achieved, versus the common, intimidating
misconception that one had to also look at all of the tools management used to achieve
those results. This makes our job much easier.
As it happens, the Quality Index, a metric produced by the Take $tock software,
evaluates just those items. We have therefore focused on differences in the Quality Index
produced by the various data sources to unearth the differences in data we have analyzed.
And we have selected for our sample only those companies that survived either the
screening by the engine producing the “Complete Roster of Quality Companies” from
BI’s data or the screening by StockCentral’s “Screener,” in each case producing either a
satisfactory or a desirable Quality Index.
Note that we have not attempted to fine-tune those differences beyond the basic
categories: desirable (green), acceptable (yellow), unacceptable (red), or not available
(black). The last category includes data that is not included in the database, data that
indicates that a company is too small or too young to be of interest, and companies that
might be in the database but which are identified by different ticker symbols.
And, we have not considered as “different” basic data items whose values depart from
those of their counterparts by less than 5 percent.
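The 5 percent materiality rule can be expressed as a simple relative-difference test. The report does not say which value serves as the denominator, so the use of the larger magnitude below is an assumption:

```python
# Minimal sketch of the "differ by more than 5 percent" rule used in the
# study. The denominator (the larger of the two magnitudes) is an assumed
# convention; the report does not specify it.

def materially_different(a, b, tolerance=0.05):
    """True when a and b differ by more than `tolerance`, relative to the larger."""
    denom = max(abs(a), abs(b))
    if denom == 0:
        return False               # both zero: identical, not different
    return abs(a - b) / denom > tolerance

# Hemscott vs. Compustat pre-tax profit for one year ($ millions):
print(materially_different(37.4, 37.6))   # False -- within 5%, not counted
print(materially_different(100, 106))     # True  -- a 5.7% gap is counted
```

Under this rule, the penny-level rounding differences that pepper the exhibits are ignored, and only genuinely divergent values count as "different."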
The Method
At first, we simply imported the data files from BI, StockCentral, and AAII into a
spreadsheet side by side and visually inspected them, highlighting those fields that
differed from one or the other, or both, by more than 5 percent. It soon became apparent
that the time we would consume in visually analyzing all the differences for 160
companies would be interminable, and that the differences were greatly magnified
because of the timing. Because we used the data from around the time that most
companies were reporting the end of their fiscal years, there were too many differences
caused by the fact that some companies had already reported new data, and others had
not.
While we waited for the providers to catch up with each other, we built a mechanism
using Microsoft Excel that passes each provider’s data through an identical process by
which the Quality Index is calculated, detects and measures all of the comparative
parameters, and produces a grade for each data file for each company. It then weights and
aggregates the results and produces a grade for each provider. It also points to most of
the anomalies in each file so we can reasonably quickly answer the question, “Which was
incorrect and why?” A new data sample can be entered and the run completed in
about an hour.
The files are graded on the following parameters:
Quantity and Timeliness of Data
Number of Data Years – Ideally, the user would have ten years of data to work
with. Some companies have not reported that many years. Providers are graded
according to the number of years below the maximum reported by any provider.
Freshness of Data (Annual) – Some differences between data results arise due to
the difference in policy regarding the speed with which the data is published.
Providers lose points when their data is a year behind the timeliest.
Freshness of Data (Quarterly) – Essentially the same as annual data except
points are taken for each quarter behind. Preliminary data is rewarded; but the
provider is penalized when preliminary data is not complete.
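The quantity-and-timeliness rules above can be sketched as two small scoring functions. The report does not publish the point values, so the one-point-per-year and two-points-per-quarter penalties below are illustrative assumptions on a 0-10 scale:

```python
# Illustrative sketch of the quantity/timeliness grading described above.
# The penalty sizes are assumptions; only the grading idea (penalize each
# provider relative to the best of the three) comes from the report.

def grade_data_years(years_reported, points_per_year=1):
    """Penalize each provider for every year short of the best provider."""
    best = max(years_reported.values())
    return {p: max(0, 10 - points_per_year * (best - y))
            for p, y in years_reported.items()}

def grade_quarter_freshness(quarters_behind, points_per_quarter=2):
    """Penalize each provider for each quarter it trails the timeliest."""
    return {p: max(0, 10 - points_per_quarter * q)
            for p, q in quarters_behind.items()}

print(grade_data_years({"Compustat": 10, "Hemscott": 10, "Reuters": 9}))
print(grade_quarter_freshness({"Compustat": 0, "Hemscott": 0, "Reuters": 1}))
```

Note that a provider is never graded against an absolute standard, only against whichever of its two peers happens to be most complete or most current.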
Peer Agreement (Raw Data)
Sales
Pre-tax Profit
Earnings per Share
Each provider’s data is compared with the data from the other two sources. The number of data points that differ by more than 5 percent from either of their peers is counted against each. When all agree, each receives a 10.
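The raw-data comparison can be sketched as follows. The report's phrasing leaves it ambiguous whether a provider is charged when it differs from one peer or from both; this sketch charges the provider whose value differs from both of its peers (the odd one out), and the half-point penalty per mismatch is an assumption:

```python
# Sketch of the raw-data peer-agreement score. Interpretation and penalty
# size are assumptions: a provider is charged only when it is the odd one
# out at a data point, and each charge costs half a point off 10.

def peer_agreement_score(series, tolerance=0.05, penalty=0.5):
    """series maps provider -> list of values (same length, same order)."""
    providers = list(series)
    mismatches = {p: 0 for p in providers}
    n_points = len(next(iter(series.values())))
    for i in range(n_points):
        for p in providers:
            # Does p's value differ by more than 5% from BOTH peers?
            differs = [
                abs(series[p][i] - series[q][i])
                > tolerance * max(abs(series[p][i]), abs(series[q][i]), 1e-12)
                for q in providers if q != p
            ]
            if differs and all(differs):
                mismatches[p] += 1
    return {p: max(0.0, 10.0 - penalty * m) for p, m in mismatches.items()}

scores = peer_agreement_score({
    "Compustat": [37.6, 8.0],
    "Hemscott":  [37.4, 8.4],
    "Reuters":   [37.6, 9.0],
})
print(scores)   # {'Compustat': 10.0, 'Hemscott': 10.0, 'Reuters': 9.5}
```

In the second data point only Reuters differs from both peers by more than 5 percent, so only Reuters is charged.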
Peer Agreement (Derived and Calculated Data)
EPS R-Squared (7-yr.)
Relevant Historical Sales Growth
Relevant Historical EPS Growth
Recent (TTM) Sales Growth
Recent (TTM) EPS Growth
Profit Margin Trends
Return on Equity Strength
The items that comprise the Quality portion of Take $tock are roughly compared. Using certain thresholds, each is assigned a color identifying it as “Desirable,” “Acceptable,” “Unacceptable,” or “Other” (missing, too young, or too small). Each is compared with the best of the three results and is penalized 5 points for each color step it is away from the highest.
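The color-step penalty can be sketched directly. The growth thresholds used to assign colors below are hypothetical; the report does not publish Take $tock's actual cutoffs, only the 5-points-per-step rule:

```python
# Sketch of the derived-data comparison: bucket each provider's metric into
# a color, then penalize 5 points per color step away from the best of the
# three. The bucketing thresholds (10% and 5%) are assumed for illustration.
from typing import Optional

COLOR_ORDER = ["Desirable", "Acceptable", "Unacceptable", "Other"]

def color_for_growth(pct: Optional[float]) -> str:
    if pct is None:
        return "Other"            # missing, too young, or too small
    if pct >= 10.0:
        return "Desirable"        # assumed threshold
    if pct >= 5.0:
        return "Acceptable"       # assumed threshold
    return "Unacceptable"

def color_step_scores(values):
    colors = {p: color_for_growth(v) for p, v in values.items()}
    best = min(COLOR_ORDER.index(c) for c in colors.values())
    return {p: 10 - 5 * (COLOR_ORDER.index(c) - best)
            for p, c in colors.items()}

# JOSB's TTM sales growth as reported by each source (see the exhibits):
print(color_step_scores({"Compustat": 16.7, "Hemscott": 16.7, "Reuters": -2.8}))
# {'Compustat': 10, 'Hemscott': 10, 'Reuters': 0}
```

Because a color step costs 5 of the 10 available points, a single data error that flips a metric from "Desirable" to "Unacceptable" wipes out a provider's score for that parameter.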
Overall Grade
Each of the above parameters is weighted and combined into a final grade.
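The combination step amounts to a weighted average scaled to the 0-100 range seen in the exhibits. The actual weights are not disclosed in the report, so the ones in this sketch are purely illustrative and will not reproduce the published grades:

```python
# Illustrative sketch of combining per-parameter grades into an overall
# provider grade. The weights are made up; the report does not publish them,
# and only six of the thirteen parameters are shown here.

def overall_grade(parameter_grades, weights):
    """Weighted average of 0-10 parameter grades, scaled to 0-100."""
    total_weight = sum(weights.values())
    weighted = sum(parameter_grades[k] * w for k, w in weights.items())
    return round(10 * weighted / total_weight, 1)

# Hemscott's column from the exhibit, with assumed weights:
grades = {"years": 7.0, "freshness_fy": 1.0, "freshness_q": 0.0,
          "sales": 6.2, "ptp": 5.1, "eps": 4.8}
weights = {"years": 1, "freshness_fy": 1, "freshness_q": 1,
           "sales": 2, "ptp": 2, "eps": 2}
print(overall_grade(grades, weights))   # 44.7 with these assumed weights
```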
The accompanying exhibits should be self-explanatory after reading what has been
explained above:
Exhibit A: SSG File Layout
[Spreadsheet figure: the field-by-field layout values of this exhibit did not survive conversion; the graded results it summarized appear in the tables below.]
Companies are selected for study from those surviving screening for the Quality Index as defined by Take $tock using both BI and StockCentral data. Reuters data is then accessed for those companies.

                                                     Hemscott         Compustat        Reuters
                                                     (StockCentral)   (BI)             (AAII)
Data originated:                                     Aug. 31, 2007    Aug. 31, 2007    Aug. 03, 2007
Companies qualifying for the Roster 118 133 109
Companies found to be "Desirable" 51 61 70
Companies found to be "Acceptable" 67 72 39
Companies found to be "Unacceptable" 29 13 36
Companies found "Too Small," "Too Young," or both 15 5 6
Companies for which symbol was not in database 17 28 28
Total companies for which data was available 162 151 151
Data from the three sources utilized by BI, StockCentral, and AAII investors has been evaluated on the basis of
thirteen parameters and graded on each. The results, conformed to a scale of 1 to 10, appear below:
1) Quantity and Timeliness of Data
Number of Data Years - Ideally, the user would have ten years
of data to work with. Some companies have not reported that
many years. Providers are graded according to the number of
years below the maximum reported by any provider.
Freshness of Data (Fiscal Year) - Some differences between
data results arise due to the difference in policy regarding the
speed with which the data is published. Providers lose points
when their data is a year behind the timeliest.
Freshness of Data (Quarterly) - Essentially the same as annual
data except points are taken for each quarter behind. Preliminary
data is rewarded; but the provider is penalized when preliminary
data is not complete.

Note: This (7/1) data was sufficiently beyond mid-year closings to have
shown little difference between providers. Earlier samples show greater disparity.
Number of Data Years 7.0 7.0 8.0
Freshness (Fiscal Year) 1.0 1.0 0.0
Freshness (Quarterly) 0.0 0.0 0.0
2) Peer Agreement (Raw Data)
Each provider's data is compared with the data from the other
two sources. The number of data points that differ more than 5%
from either of their peers are counted against each. When all
agree, each receives a 10.
Sales 6.2 6.6 6.5
Pre-tax Profit 5.1 4.9 5.1
Earnings per Share 4.8 4.9 5.0
3) Peer Agreement (Derived and Calculated Data)
The items that comprise the Quality portion of Take $tock are
roughly compared. Using certain thresholds, each is assigned a
color identifying it as "Desirable," "Acceptable," "Unacceptable,"
or "Other" (missing, too young, or too small). Each is compared
with the best of the three results and is penalized 5 points for
each color step it is away from the highest.
EPS R-Squared (7-yr) 4.4 6.0 5.2
Relevant Historical Sales Growth 8.7 5.8 8.3
Relevant Historical EPS Growth 7.6 4.3 7.2
Recent (TTM) Sales Growth 8.7 5.6 8.1
Recent (TTM) EPS Growth 7.5 3.9 7.3
Profit Margin Trends 9.0 6.4 8.9
ROE Strength 9.4 6.4 9.1
4) Overall Grade
Each of the above parameters is weighted and combined into a
final grade. 79.4 62.7 78.7
O'REILLY AUTOMOTIVE, INC. (ORLY)
0963 - Retail (Specialty Non-Apparel) (09 - Services)                          9/23/07
FY end: Dec 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006
OPS 316.4 616.3 754.1 890.4 1,092.1 1,312.5 1,511.8 1,721.2 2,045.3 2,283.2
Sales
Hemscott 316.4 616.3 754.1 890.4 1,092.1 1,312.5 1,511.8 1,721.2 2,045.3 2,283.2
Reuters 316.4 616.3 754.1 890.4 1,092.1 1,312.5 1,511.8 1,721.2 2,045.3 2,283.2
OPS 37.6 49.9 73.0 83.2 106.7 131.0 160.0 187.7 251.1 282.3
PTP
Hemscott 37.4 50.0 73.0 83.1 106.7 131.0 160.1 187.7 251.2 282.2
Reuters 37.6 49.9 73.0 83.2 106.7 131.0 160.0 187.7 251.1 282.3
OPS 0.27 0.36 0.46 0.50 0.63 0.77 0.92 1.06 1.39 1.55
EPS
Hemscott 0.27 0.36 0.45 0.50 0.63 0.76 0.92 1.06 1.45 1.55
Reuters 0.27 0.36 0.46 0.50 0.63 0.76 0.92 1.06 1.45 1.55
OPS 84.0 85.0 102.0 103.0 106.0 107.0 109.0 111.0 112.0 114.0
Shrs.
Hemscott 84.0 85.0 102.0 103.0 106.0 107.0 109.0 111.0 113.0 115.0
Reuters 84.2 85.0 97.3 102.3 104.2 106.2 107.8 110.0 111.6 113.3
OPS 11.9% 8.1% 9.7% 9.3% 9.8% 10.0% 10.6% 10.9% 12.3% 12.4%
Quality Ind %PTP
Hemscott 11.8% 8.1% 9.7% 9.3% 9.8% 10.0% 10.6% 10.9% 12.3% 12.4%
Reuters 11.9% 8.1% 9.7% 9.3% 9.8% 10.0% 10.6% 10.9% 12.3% 12.4%
Price 5yr Ago Revenue EPS R-Sq Hist.Sls Gr. Hist.EPS Gr. TTM Sls Gr. TTM EPS Gr. Prof. Mgn/Dif ROE Qual Index
OPS 18.625 2,283.2 99.0% 11.6% 11.5% 10.3% 9.3% 12.4% / 1.7% 15.2% 5.3
Hemscott 19.10 2,283.2 99.0% 11.6% 6.8% 10.3% 4.9% 12.4% / 1.7% 15.2% 2.6
Reuters 18.625 2,283.2 99.0% 11.6% 6.8% 10.3% 5.0% 12.4% / 1.7% 15.1% 2.6
Dates Aligned Sep-05 Dec-05 Mar-06 Jun-06 SALES Sep-06 Dec-06 Mar-07 Jun-07 (Q2 2007)
OPS 542.9 515.0 536.5 591.2 597.1 558.3 613.1 643.1 10.3%
Quarterly Data
Hemscott 542.9 515.0 536.5 591.2 597.1 558.3 613.1 643.1 10.3%
Reuters 542.9 515.0 536.5 591.2 597.1 558.3 613.1 643.1 10.3%
EPS
OPS 0.37 0.35 0.35 0.43 0.42 0.35 0.42 0.45 9.3%
Hemscott 0.43 0.35 0.35 0.43 0.42 0.35 0.42 0.45 4.9%
Reuters 0.43 0.35 0.35 0.43 0.42 0.35 0.42 0.45 5.0%
COMMENTS:
What caused the QI to be okay for the BI result and not for the Hemscott or Reuters results? In the quarter ending
9/30/05, Compustat recognized as non-recurring approximately $6.1 million by which the provision for income taxes
was reduced as a result of resolving a tax matter for which the company had previously set aside money. Eliminating
that non-recurring, non-cash increase in funds reduces earnings by about 5 cents per share. That single-quarter
difference knocked ORLY out of the roster as a result of the slight difference in growth rate between 2005 and 2006.
Award the kudos to Compustat this time for digging that information out of the footnotes and applying it.
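The effect of that five-cent adjustment can be checked from the quarterly EPS rows above. The sketch below assumes TTM growth compares the sum of the four most recent quarters against the sum of the four before them; it reproduces the Compustat (OPS) figure exactly and comes within rounding and share-count differences of the Hemscott/Reuters figures:

```python
# Trailing-twelve-month growth: sum of the four most recent quarters versus
# the sum of the four quarters before them (a standard definition, assumed
# here to be what the exhibit uses).

def ttm_growth(quarters):
    """quarters: eight values, oldest first; returns growth in percent."""
    prior, recent = sum(quarters[:4]), sum(quarters[4:])
    return round(100 * (recent / prior - 1), 1)

# ORLY quarterly EPS, Sep-05 through Jun-07:
compustat = [0.37, 0.35, 0.35, 0.43, 0.42, 0.35, 0.42, 0.45]
hemscott  = [0.43, 0.35, 0.35, 0.43, 0.42, 0.35, 0.42, 0.45]  # no tax adjustment

print(ttm_growth(compustat))  # 9.3 -- matches the OPS figure above
print(ttm_growth(hemscott))   # 5.1 -- near the 4.9/5.0 shown; rounding and
                              # diluted-share differences account for the rest
```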
JOS. A. BANK CLOTHIERS, INC. (JOSB)
0945 - Retail (Apparel) (09 - Services)                                        9/23/07
FY end: Jan 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006
OPS 172.2 187.2 193.5 206.3 211.0 243.4 299.7 372.5 464.6 546.4
Sales
Hemscott 172.2 187.2 193.5 206.3 211.0 243.4 299.7 372.5 464.6 546.4
Reuters 172.2 187.2 193.5 206.3 211.0 243.4 299.7 372.5 464.6 546.4
OPS 4.1 7.4 5.3 8.0 10.3 18.6 28.9 40.4 60.0 72.2
PTP
Hemscott 4.1 7.4 5.3 8.4 10.7 18.6 28.9 40.3 60.1 72.2
Reuters 4.1 7.4 5.3 8.0 10.3 18.5 28.4 40.4 60.0 72.2
OPS 0.15 0.36 0.20 0.34 0.46 0.66 0.96 1.38 1.95 2.32
EPS
Hemscott 0.16 0.37 0.21 0.36 0.48 0.66 0.96 1.38 1.96 2.36
Reuters 0.15 0.36 0.20 0.34 0.46 0.66 0.94 1.38 1.96 2.36
OPS 16.0 16.0 16.0 14.0 14.0 15.0 16.0 17.0 17.0 18.0
Shrs.
Hemscott 16.0 16.0 16.0 15.0 15.0 17.0 17.0 18.0 18.0 18.0
Reuters 15.9 15.9 15.9 14.4 14.0 14.4 15.3 16.7 17.0 18.0
OPS 2.4% 4.0% 2.8% 3.9% 4.9% 7.7% 9.6% 10.8% 12.9% 13.2%
Quality Ind %PTP
Hemscott 2.4% 4.0% 2.8% 4.1% 5.1% 7.7% 9.6% 10.8% 12.9% 13.2%
Reuters 2.4% 4.0% 2.8% 3.9% 4.9% 7.6% 9.5% 10.8% 12.9% 13.2%
Price 5yr Ago Revenue EPS R-Sq Hist.Sls Gr. Hist.EPS Gr. TTM Sls Gr. TTM EPS Gr. Prof. Mgn/Dif ROE Qual Index
OPS 11.665 546.4 99.0% 13.8% 18.9% 16.7% 29.2% 13.2% / 2.9% 26.1% 6.8
Hemscott 11.30 546.4 100.0% 13.8% 20.5% 16.7% 30.9% 13.2% / 2.9% 26.5% 6.8
Reuters 11.665 546.4 99.0% 13.8% 20.5% -2.8% 9.6% 13.2% / 2.9% 26.1% 2.6
Dates Aligned Jul-05 Oct-05 Jan-06 Apr-06 SALES Jul-06 Oct-06 Jan-07 Apr-07 (Q1 2007)
OPS 98.6 105.6 163.8 113.7 119.1 119.5 194.1 129.5 16.7%
Quarterly Data
Hemscott 98.6 105.6 163.8 113.7 119.1 119.5 194.1 129.5 16.7%
Reuters 195.2 105.6 163.8 113.7 119.1 119.5 194.1 129.5 -2.8%
EPS
OPS 0.30 0.26 1.02 0.32 0.38 0.27 1.35 0.45 29.2%
Hemscott 0.30 0.26 1.03 0.32 0.38 0.30 1.36 0.46 30.9%
Reuters 0.67 0.26 1.02 0.32 0.38 0.30 1.36 0.46 9.6%
COMMENTS:
Here, Reuters has a typo. For the second quarter of 2005 (ending July 31st), the data-entry clerk entered the
six-month values for sales and earnings instead of the quarterly figures. The inflated values for 2005 reduced the
apparent growth rate for the trailing twelve months from that year to the next, enough to knock JOSB out of the
ballpark for anyone using Reuters information for a study.
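The JOSB error can be reproduced from the quarterly sales rows above. Assuming TTM growth compares the sum of the four most recent quarters against the sum of the four before them, a single doubled quarter in the prior period drags the growth rate down sharply:

```python
# Demonstration of how one mistyped quarter (a six-month figure entered as a
# quarterly one) flips JOSB's TTM sales growth from strongly positive to
# negative. The TTM definition (recent four quarters vs. prior four) is a
# standard one, assumed to match the exhibit.

def ttm_growth(quarters):
    """quarters: eight values, oldest first; returns growth in percent."""
    prior, recent = sum(quarters[:4]), sum(quarters[4:])
    return round(100 * (recent / prior - 1), 1)

# JOSB quarterly sales ($ millions), Jul-05 through Apr-07:
correct = [98.6, 105.6, 163.8, 113.7, 119.1, 119.5, 194.1, 129.5]
reuters = [195.2, 105.6, 163.8, 113.7, 119.1, 119.5, 194.1, 129.5]  # typo

print(ttm_growth(correct))  # 16.7 -- as Compustat and Hemscott report
print(ttm_growth(reuters))  # -2.8 -- the typo's effect
```

Because the inflated quarter sits in the prior-year window, it raises the comparison base rather than the recent total, which is why the error shows up as an apparent sales decline.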