SG24-5499-00
International Technical Support Organization A Design and Implementation Guide for Tivoli Decision Support
October 1999
Take Note! Before using this information and the product it supports, be sure to read the general information in Appendix D, Special notices on page 191.
First Edition (October 1999)

This edition applies to Tivoli Framework Version 3.6.1, Tivoli Enterprise Console Version 3.6.1, Tivoli Distributed Monitoring Version 3.6.1, Tivoli Service Desk Version 5.02, and Tivoli Decision Support Version 2.0 for use with the AIX Version 4.3 and Windows NT 4.0 operating systems.

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. OSJB Building 003 Internal Zip 2834
11400 Burnet Road
Austin, Texas 78758-3493

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 1999. All rights reserved.

Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Figures  vii
Tables  xi
Preface  xiii
The team that wrote this redbook  xiii
Comments welcome  xv

Chapter 1. Introduction  1
1.1 From fire fighting to business intelligence  1
1.2 The desired solution for business information  3
1.3 Decision Support Systems  4
1.4 Positioning Tivoli Decision Support in the decision making process  5
1.5 Our approach  7

Chapter 2. Tivoli Decision Support general overview  9
2.1 Overview of Tivoli Decision Support  9
2.2 Tivoli Decision Support product components  10
2.2.1 Tivoli Decision Support Discovery Administrator  11
2.2.2 Tivoli Decision Support Server component  12
2.2.3 Tivoli Decision Support Discovery Interface  12
2.2.4 Cognos PowerPlay  12
2.2.5 Crystal Reports  12
2.2.6 Tivoli Decision Support Discovery Guides  12
2.3 Tivoli Decision Support implementation modes  14
2.4 Supported platforms  15
2.5 Concepts and terminology  16
2.6 How Tivoli Decision Support works  19
2.7 Who is making use of Tivoli Decision Support?  21

Chapter 3. Methodology  23
3.1 Tivoli Implementation Methodology  23
3.2 Implementing Tivoli Decision Support  25
3.2.1 Requirements gathering phase  25
3.2.2 Systems analysis phase  30
3.2.3 Project planning phase  40
3.2.4 Deployment phase  47
3.2.5 Testing phase  52
3.2.6 Documentation phase  54
4.2 Integrating Decision Support with Tivoli Enterprise applications  60
4.3 Tivoli Decision Support components  61
4.4 Integrating Tivoli Decision Support components  63
4.4.1 The cube-building process  64
4.4.2 The Discovery Interface  68
4.5 Stand-alone vs. network architecture  70
4.6 Suggested architecture and design solutions  72
4.6.1 Tivoli Service Desk environment - Case study  72
4.6.2 Single TMR environment - Case study  74
4.6.3 Multiple TMR environment - Case study  77
4.7 Troubleshooting tips  81
4.7.1 ODBC source  81
4.7.2 Cube building  81
4.7.3 Discovery interface  82
Chapter 5. Case study  85
5.1 Overview  85
5.2 Methodology  86
5.2.1 Requirements gathering phase  86
5.2.2 Systems analysis and design  86
5.2.3 Deployment  87
5.3 The existing Tivoli environment  87
5.3.1 Tivoli general architecture  87
5.3.2 TMR servers  88
5.3.3 Endpoint gateways  89
5.3.4 TEC server  89
5.3.5 RDBMS and RIM hosts configuration  90
5.3.6 Tivoli DM and monitors for performance and capacity trend data  90
5.4 Identifying the reports requirements  92
5.4.1 Customer reporting requirements  93
5.4.2 The SDC actual solution for reporting  93
5.4.3 The Reports of the NCO account  97
5.5 Customer objectives  113
5.6 Mapping Tivoli Decision Support Discovery Guides  114
5.6.1 Detailed reports mapping workshop  114
5.7 Tivoli Decision Support reports and business information  116
5.7.1 Server Performance Prediction Guide  116
5.7.2 Event Management Guide  125
5.7.3 Domino Management Guide  128
5.7.4 Network Element Performance Guide  137
5.8 Suggested architecture and solution design  140
5.9 Tivoli Decision Support deployment process  144
5.9.1 Hardware and Software prerequisites installation  145
5.9.2 Installation of the Tivoli Decision Support server components  145
5.9.3 Installation of the Tivoli Decision Support Administrator  153
5.9.4 Installation of the Tivoli Decision Support client components  154
5.9.5 Deploying TDS for server performance prediction  154
5.9.6 Deploying the Event Management Guide  161
5.10 Future reporting requirements  162
5.10.1 Additional reports  162
5.10.2 Additional recommended TDS Discovery Guides  163

Chapter 6. Reports and decision information usage  167
6.1 The scenario  168
6.2 The roles  168
6.2.1 The systems analyst role  168
6.2.2 The IT manager role  169
6.2.3 The Chief Executive Officer role  169
6.3 The discovery process  169
6.3.1 The system analyst discovery process  169
6.3.2 IT manager discovery process  177
6.3.3 CEO's discovery process  181
6.4 Conclusion  183
Appendix A. Tivoli Implementation Methodology (TIM) 3.6  185
A.1 Target market  185
A.2 Customer profile  185
A.3 The top three things to remember  185
A.4 What is new with TIM?  186
A.5 What is unique?  186
A.6 Where can I find information on TIM?  186

Appendix B. Tivoli Decision Support customer support  187
B.1 The support process  187

Appendix C. Tivoli Decision Support Discovery Guides availability  189

Appendix D. Special notices  191

Appendix E. Related publications  195
E.1 International Technical Support Organization publications  195
E.2 Redbooks on CD-ROM  196
E.3 Other publications  196

How to get ITSO redbooks  197
IBM Redbook fax order form  198
Figures
1. The evolution to business intelligence  2
2. The challenge of a better solution for business information  4
3. TDS in the decision making process  6
4. Tivoli Decision Support components  11
5. Tivoli Decision Support in stand alone implementation mode  14
6. Tivoli Decision Support network implementation mode  15
7. The operation of Tivoli Decision Support  20
8. TIM schematic overview  24
9. Requirements gathering process flow  26
10. Systems Analysis process flow  31
11. Typical architecture  35
12. File server information example form  37
13. TDS administrator PC information example form  38
14. TDS client PC information example form  38
15. Database server information example form  39
16. Network information example form  40
17. Project planning process flow  41
18. Sample project plan  44
19. Deployment process flow  48
20. Testing process flow  52
21. Documentation process flow  55
22. Tivoli Decision Support functionality diagram  60
23. Decision Support components integration  63
24. Cube-building process - Step 1  65
25. Cube-building process - Step 2  66
26. Cube-building process - Step 3  66
27. Cube-building process - Step 4  67
28. Viewing the multidimensional reports  68
29. Viewing Crystal Reports  69
30. TDS in stand-alone mode  70
31. Network installation architecture  71
32. Tivoli Service Desk environment case study  73
33. Single TMR environment case study  75
34. Single TMR environment with Tivoli Decision Support  76
35. Multiple TMR environment case study  78
36. Multiple TMR environment with Tivoli Decision Support  79
37. Service Delivery Center - West architecture  88
38. Tivoli Distributed Monitoring object relationships  92
39. The Problem for reporting  94
40. The in-house process for reporting  95
41. The SRM method for reporting  97
42. In-house performance and capacity  98
43. Detailed report - CPU utilization by server  99
44. Detailed report - process memory and paging utilization by server  100
45. Detailed report - network I/O utilization  101
46. Detailed report - DASD usage by server  102
47. Percentage availability by server  103
48. Detailed alert summary by server  103
49. Lotus Notes - Monthly mail server statistics report  104
50. Lotus Notes - Monthly database server report  105
51. Lotus Notes - Daily mail hub report  106
52. Lotus Notes - Daily MTA server report  106
53. Lotus Notes - Hourly response time report  107
54. Lotus Notes - Hourly concurrent users report  107
55. Lotus Notes - Hourly sessions-per-minute report  108
56. Lotus Notes - Hourly mail box size report  108
57. Lotus Notes - Hourly SMTP transferred messages report  109
58. AIX servers - CPU utilization reports  110
59. AIX servers - Hard disk and file systems utilization report  111
60. AIX servers - Account summary report  111
61. NT servers - CPU, memory, and disk utilization report  112
62. NT servers - Account Summary Report  113
63. All System Metrics report  118
64. CPU utilization by server report  119
65. Memory utilization report  120
66. Network I/O utilization report  121
67. CPU utilization memory page rates by operating system  122
68. Summary report by operating system  123
69. CPU average forecast by system purpose  124
70. Under-provisioned/Over-provisioned servers report  125
71. SLA statistics by event class  126
72. Which events take the longest to fix? report  127
73. Event source volume by hour report  128
74. Domino network traffic report  130
75. Domino server statistics - Mail routed by server report  131
76. Domino statistics - Total KB transferred report  132
77. Domino statistics - Number of users report  133
78. Domino statistics - Mail average delivery time report  134
79. Domino statistics - Replication statistics report  135
80. Domino statistics - Server average delivery time by hour report  136
81. Domino statistics - Mail box file size by server report  137
82. Network Element Performance Guide - Cisco CPU utilization report  138
83. Network Element Performance Guide - Name server speed by hour  139
84. Network Element Performance Guide - Top ten nodes by transition count  140
85. Recommended architecture in network mode  141
86. The update procedure first script - transfer.cmd  147
87. The update procedure second script - copycubes.cmd  148
88. The Transfer_Cubes task  149
89. Defining the Transfer_Cubes task  149
90. The Transfer_Cubes job  150
91. Defining the Transfer_Cubes job  150
92. The Copy_Cubes task  151
93. Defining the Transfer_Cubes task  151
94. The Copy_Cubes job  152
95. Defining the Transfer_Cubes job  152
96. Scheduling the jobs  153
97. Scheduled jobs  153
98. Lotus Notes mail servers by CPU utilization  171
99. Lotus Notes Mail Servers daily average run length cue  172
100. Lotus Notes mail servers by memory utilization  173
101. Lotus Notes mail servers that need more memory  174
102. Lotus Notes mail servers by network utilization  175
103. Lotus Notes mail server - forecasted average mail delivery time  176
104. Under-provisioned and over-provisioned Notes servers  178
105. Performance anomalies by server  179
106. Lotus Notes server approaching critical thresholds  180
107. Lotus Notes server daily average performance trends  182
Tables
1. Requirements gathering phase items  26
2. Systems analysis phase items  31
3. Minimum configuration table  36
4. Project planning phase items  40
5. TDS workshop summary  46
6. Deployment phase items  47
7. TDS deployment guide  48
8. Testing phase items  52
9. Documentation phase items  55
10. The TDS Discovery Guides mapping  114
11. Detailed mapping reference table  115
12. Macro procedure for deploying TDS  144
13. Minimum hardware requirements  145
14. TDS file server deployment steps  146
15. SPP Discovery Guide installation steps  154
16. DM Roll-up installation steps  156
17. Event Management Discovery Guide installation steps  161
18. Future requirements reference table  163
19. TDS Discovery Guides general availability  189
Preface
Deploying a Tivoli Decision Support solution requires careful planning and involves numerous activities. The primary objective of this redbook is to describe the methodology used to deploy Tivoli Decision Support and to migrate to it from existing reporting tools, using an IBM service delivery center as a case study. In addition, we describe how decision makers with different roles and responsibilities can benefit from Tivoli Decision Support and make better decisions, by simulating typical problems in the IT business.

This redbook is targeted at the technical professional responsible for migrating from the reporting tools currently used in his or her organization to Tivoli Decision Support, and it can serve as a reference during the deployment of Tivoli Decision Support. It is a valuable addition to the existing product documentation and is aimed at both architects and implementors of enterprise systems management solutions. It should be read in conjunction with the product documentation, which complements some of the concepts explained in this book.
He is currently working on deploying Tivoli enterprise solutions for several IBM customers in Brazil.

Dave Hulse is an Advisory IT Specialist working at IBM Global Services Johannesburg, South Africa. He has over 20 years of experience in the IT industry. He has been with IBM for 18 months and, during that time, was project leader responsible for the design and deployment of the largest Tivoli implementation in Southern Africa. His areas of expertise include designing customer IT solutions, and he has extensive experience in the field of systems management.

Rakesh Parshotam is an Advisory IT Specialist working as a Tivoli Architect at IBM Global Services in South Africa. He holds a degree in Computer Science and is a Certified Tivoli Consultant and Microsoft Certified Systems Engineer. He has been working with Tivoli for the past three years and has held various positions, including Technical Team Leader for major Tivoli systems management deployment projects in South Africa.

The team would like to express special thanks to Ling Tai, Senior Software Engineer working for Tivoli in Raleigh, for her major contribution to this book.

Thanks also go to the following people for their invaluable contributions to this project:

Kim Querner
Tivoli Systems, Austin

Bill Meloling
Tivoli Systems, Raleigh

Lisa Chaves, Axel Elfner
IBM, Tucson

Shawn Eldridge, Douglas Fuzie
Tivoli Systems, Indianapolis

Temi Rose
International Technical Support Organization, Austin Center

Milos Radosavljevic
International Technical Support Organization, Austin Center
Comments welcome
Your comments are important to us! We want our redbooks to be as helpful as possible. Send us your comments about this or other redbooks in one of the following ways:

Fax the evaluation form found in "ITSO redbook evaluation" on page 209 to the fax number shown on the form.
Use the online evaluation form found at http://www.redbooks.ibm.com/
Send your comments in an internet note to redbook@us.ibm.com
Chapter 1. Introduction
This redbook was written with the input and experience of many people. The result is a suggested approach that may apply directly to your situation or serve as a guide for anyone implementing Tivoli Decision Support in a large-scale environment.

Enterprises usually have some reporting tools that assist in the performance of daily tasks. Very often, these tools are neither well integrated with the business of the enterprise nor able, for example, to provide predictive information about growth or change. In addition, these tools generally do not provide a good and easy way of interpreting information that helps to make better decisions, because they are normally designed for, and used by, technical people; for everyone else, interpreting their output can be about as easy as reading and understanding hieroglyphics.

Decision making often requires the analysis of large amounts of data, complex relationships, and abstract correlations. Decision support systems usually help in the evaluation of the consequences (the "what if") of given decisions and may advise which decisions are best for achieving particular goals.

We will move towards a real scenario explaining the methodology used, the architecture and design considerations, and all phases of deployment of the Tivoli solution for the decision-making process. Furthermore, we will simulate a typical problem showing how decision makers with different roles and responsibilities can benefit from the business information provided by Tivoli Decision Support in order to make more efficient decisions.

We do not explain the product details of Tivoli applications in this book; we assume that the reader is reasonably familiar with the Tivoli architecture and Tivoli applications. We have dedicated Chapter 2, "Tivoli Decision Support general overview" on page 9, to providing the reader with a brief introduction to Tivoli Decision Support.
across the entire enterprise as e-business spawns more and more devices that are connected to it.

A few years ago, it was sufficient for service providers (IT departments) to manage and plan business operations using monthly batch reports. Changes in the organizations took a long time to be implemented. Later, with the implementation of some management tools, we were able to run queries against historical operational data to produce reports or charts, but these only allowed us to react to problems. Today, enterprises need to provide decision makers with fast and easy access to information that reflects the constant changes in the environment. Decision makers and customers need access to tools that provide them with the ability to identify trends and model relationships in the data to find behavioral anomalies in the business environments.
[Figure 1 here: a chart plotting IT Investments against Business Drivers, showing the evolution from few defined processes, through reactive monitoring and defined processes in a managed environment, to a proactive management platform, with customer satisfaction increasing along the way.]
Figure 1. The evolution to business intelligence
Large amounts of data are stored in your enterprise. This data contains precious information about the way the enterprise does business, its processes, and its customers. In today's competitive climate, using the knowledge provided by this data to make strategic business decisions can often move the enterprise ahead of the competition.

Business intelligence is what we are describing in this section: it is the ability to be proactive about problems, to leverage the assets in our business to profit from available data, and to provide the know-how needed to make well-informed decisions for our business. As e-business drives the need for more network devices, we will need technologies that enable us to manage resources across the entire enterprise, end-to-end, not only by exception but by predictive analysis as well. One such technology is Tivoli Decision Support; soon, people will ask: How did we manage without it?
[Figure 2 here: a diagram of many separate monitoring tools, Monitor A through Monitor Z, each producing its own data, illustrating the challenge of consolidating business information.]
As shown in Figure 2, the challenge is to find a complete solution that provides an integrated framework, or platform, offering multiple management functions across multiple vendor applications, services, and devices throughout the entire enterprise, collecting the data and storing it in a standard format that can be processed and transformed into meaningful business information. Attempts to provide this capability are not new. One such solution is Tivoli, which offers centralized, policy-based functions, such as user management and software distribution, and services accessible to third-party vendors.
an application; this person may be a manager, engineer, or operator) and an expert who may be that person's supervisor. Decisions are made within a Decision Making Process (DMP), which, in situations that justify the use of a DSS, is a complex sequence of tasks. We assume that the final decision is made by the Decision Maker; a Decision Support System does not serve as a replacement for, or a control over, the DM. In other words, a DSS is not aimed at the automatic selection of decisions.

The following are characteristics of a DSS:

Systems that facilitate or extend knowledge management capabilities.
Systems that coordinate distributed decision making.
Systems that offer advice, expectations, facts, analyses, and so on.

The user interface of a DSS is designed in such a way that a DM may obtain, from the DSS, information and answers to the questions that she or he considers important for a DMP. DSS are interactive by nature. Even though a DSS might be unable to solve a problem facing the DM, it can be used to stimulate the DM's thoughts about the problem.

The following DSS definition best explains the class of DSS we will work with: a DSS is a supportive tool for the management and processing of large amounts of information and logical relations that helps a DM extend his or her habitual domain and, thus, reach a better decision. In other words, a DSS can be considered a tool that, when under the full control of a DM, performs the difficult tasks of data processing and provides relevant information that enables the DM to concentrate on this part of the DMP.
timely manner, windows of opportunity can close, and business is usually not done. Tivoli Decision Support as a technology is best known for dynamically providing the decision maker with interactive business indicators and then allowing the user to look at those indicators from many different perspectives. For example, suppose that a product manager wants to know how well the product is being supported in South America this month and compare the rates with the same month of the previous year. Once she or he views the high-level report, she or he may drill into the region to look only at how Brazil is doing. Moreover, she or he may drill into the southeast region and look at how a particular city is doing. This technology is called On-line Analytical Processing (OLAP) or multidimensional analysis. In addition, Tivoli Decision Support allows the customer to benefit from information collected by the customer's Tivoli solution. For example, support centers collect a large quantity of transactional data from their customers, which contains valuable information about the way those customers interact with the business. With Tivoli Decision Support, Decision Makers have a way to manage this data and convert it into useful information, providing a means to evaluate and identify trends, to gain insight into the way customers do business, and to make better decisions.
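The drill-down sequence just described can be pictured in a few lines of Python. This is only an illustrative sketch of the OLAP concept, not the Tivoli Decision Support API; the data, region names, and the `roll_up` helper are all invented.

```python
# Illustrative sketch of OLAP-style drill-down (not the TDS API).
# Support-call counts are stored against a path through the region hierarchy.
calls = {
    ("South America", "Brazil", "Sao Paulo"): 120,
    ("South America", "Brazil", "Rio de Janeiro"): 80,
    ("South America", "Argentina", "Buenos Aires"): 60,
}

def roll_up(data, level):
    """Aggregate call counts at the given depth of the hierarchy."""
    totals = {}
    for path, count in data.items():
        key = path[:level]
        totals[key] = totals.get(key, 0) + count
    return totals

# High-level view first, then drill into Brazil's cities.
continent = roll_up(calls, 1)
cities_in_brazil = {p: c for p, c in calls.items() if p[1] == "Brazil"}
```

Each additional level of the path corresponds to one drill-down step: the continent total decomposes into country totals, and a country into its cities.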
[Figure 3: Tivoli applications store management data in an RDBMS, from which Tivoli Decision Support draws its information]
As shown in Figure 3 on page 6, Tivoli Management Applications, such as Enterprise Console, Service Desk, Inventory, Distributed Monitoring, and so on, collect the data and store it in databases. Tivoli Decision Support selects management data from these databases, performs calculations, and adds value to the data by managing the natural relationships in the data. At this point, only business-relevant information is offered to the Decision Maker who is in charge of the decision. With Tivoli Decision Support, the Tivoli Enterprise solution has moved a step closer to the desired solution for business decision information, providing the ability to:
Measure the effectiveness of your operation
Gain insight into the potential satisfaction level of customers
Gain insight into the value of your customer relationships
Further leverage your investment in technology and automation
Identify areas of weakness to convert from reactive activities to proactive planning
Discover patterns that influence your decision making and future planning
Become more efficient and effective
Gain control over your business faster
Deployment phase
Testing phase
Documentation phase
Chapter 4, TDS architecture and design considerations, on page 59: In this chapter, we describe considerations for the Tivoli Decision Support architecture, topology, and design, such as:
How Tivoli Decision Support integrates with the Tivoli architecture
How the Tivoli Decision Support components work together
Tivoli Decision Support architectures based on the case study environments
Troubleshooting tips
Chapter 5, Case study, on page 85: This chapter exercises the knowledge acquired in the previous chapters through an example: treating one of the IBM Service Delivery Centers as a customer, it presents a structured Tivoli Decision Support deployment solution.
Chapter 6, Reports and decision information usage, on page 167: This chapter demonstrates how Tivoli Decision Support can support the decision making process by describing a simple scenario and outlining the steps used to find and analyze critical data in order to make a well-informed decision.
collected can be shared with others in the organization using delivery mechanisms including hard-copy printouts, files, and push content. In the latter case, content that has been collected by one user can be sent to a central repository on a company's intranet from which other users can gain access to the content.
The following sections are dedicated to explaining each of the Tivoli Decision Support components. For further information, refer to the product documentation.
It enables you to set parameters that are specific to your enterprise's operation. These parameters, such as severity level thresholds and business hours, determine how Tivoli Decision Support interprets data and makes calculations when generating views.
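As a rough illustration of how such administrator-set parameters might drive interpretation, the sketch below classifies a single data point against a severity threshold and a business-hours window. The parameter names (`critical_severity`, `business_hours`) and the `classify` function are invented for this example and are not part of the product.

```python
# Hypothetical sketch: administrator-set parameters shaping how raw
# data is interpreted (names invented, not TDS configuration keys).
PARAMS = {"critical_severity": 4, "business_hours": (8, 18)}

def classify(severity, hour, params=PARAMS):
    """Interpret one event against the configured thresholds."""
    start, end = params["business_hours"]
    return {
        "critical": severity >= params["critical_severity"],
        "in_business_hours": start <= hour < end,
    }
```

Changing the thresholds changes the classification of the same raw data, which is exactly why these parameters must reflect the enterprise's actual operation.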
contains a number of views that are associated with the data elements being examined. Tivoli Decision Support uses the Discovery Guides to aid in discovering key information. With this information, Tivoli Decision Support becomes a powerful end-user solution that provides users with a comprehensive set of views into their enterprise's data. Along with the views come methods (including algorithms, queries, and reports) for abstracting key business indicators from the business data. Managers can use these indicators as key business information to improve efficiency, performance, and profitability. TDS includes several Discovery Guides; other Guides are available as additional options. Appendix C, Tivoli Decision Support Discovery Guides availability, on page 189, offers a complete list of the Tivoli Decision Support Discovery Guides, including those that are shipped with TDS Version 2.0. TDS Discovery Guides contain the algorithms, queries, reports, views, and business models that best represent a business concept. Guides can be very robust, containing several hundred views and multiple business models. Along with the views, Guides have embedded contextual information associated with the views. The context helps users identify, discover, and understand what a view has to offer. As the user views and interprets data in Tivoli Decision Support, the Tivoli Discovery Interface provides several features to assist the user: hints, related views, and keyword searches. No customization, analysis, or programming is required to use Tivoli Decision Support Guides. By selecting guides in the Discovery Interface, managers can define the scope of their data searches to yield the most relevant results for their needs. A call center manager, for instance, may want to see only data that pertains to his or her area of the business. He or she may not need to review data that another department manager needs to review.
It is only necessary to activate all relevant TDS Discovery Guides and turn off all other guides. The views shown in the Tivoli Discovery Interface topic map are then only those associated with the Call Center Management Discovery Guide. Managers can select as many Guides as they want to expand the scope of their data search. For instance, if the call center manager wants to review not only relevant call center data but also data collected about the health of his or her business contacts, he or she selects the Call Center Management Discovery Guide as well as the Relationship Management Discovery Guide. Now, the views available to the call center manager in the
topic map are a combination of the two guides he or she selected. The call center manager's scope has changed to encompass more views.
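The guide-selection behavior described above amounts to taking the union of each active guide's views. The sketch below illustrates the idea; the guide and view names are invented, and this is not the product's internal data structure.

```python
# Sketch: the topic map shows the union of views contributed by the
# guides a user activates (guide and view names are invented).
GUIDE_VIEWS = {
    "Call Center Management": {"Calls by region", "Average resolution time"},
    "Relationship Management": {"Contact health", "Calls by region"},
}

def topic_map_views(active_guides):
    """Collect every view offered by the currently active guides."""
    views = set()
    for guide in active_guides:
        views |= GUIDE_VIEWS.get(guide, set())
    return views
```

Note that a view shared by two guides (here, "Calls by region") appears only once: activating a second guide widens the scope without duplicating views.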
[Figure: ODBC connection from Tivoli Decision Support to the Inventory, DM (Distributed Monitoring), and TEC databases]
Network mode: Only the TDS Discovery Interface and Cognos PowerPlay are installed on the client machine. In addition, the client requires both the database client and the ODBC driver in order to access the data stored on the database server when generating Crystal reports. The TDS Version 2.0 installation CD contains the Intersolv 3.01 32-bit ODBC driver for Oracle and Sybase. The TDS Server component is installed on a file server; the client machines access a shared drive on it that contains all the generated cubes. A separate machine is used as the administrator system, where the Discovery Administrator module is installed along with PowerPlay. The
Administrator system should access the shared drive on the TDS file server. In addition, it also requires the database client and the ODBC driver in order to have a connection to the database from which the cubes are built. The cube files created by the Discovery Administrator are stored on the TDS file server. The diagram in Figure 6 on page 15 shows an example of a Tivoli Decision Support deployment in network mode:
[Figure 6: TDS in network mode — Crystal Reports on the client machines and a separate database server]
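The ODBC-based database access required by the administrator system and by clients generating Crystal reports can be pictured as assembling a DSN-style connection string. The DSN, user, and password values below are placeholders, and this helper is our own sketch, not a TDS or Intersolv utility; the `DSN`/`UID`/`PWD` keywords themselves are standard ODBC connection-string keys.

```python
# Sketch: building a DSN-style ODBC connection string such as the
# administrator system would need to reach the database server.
# The DSN name and credentials are placeholders.
def odbc_conn_str(dsn, user, password):
    """Assemble a standard key=value ODBC connection string."""
    return ";".join([f"DSN={dsn}", f"UID={user}", f"PWD={password}"])

conn = odbc_conn_str("TDS_ORACLE", "tds_admin", "secret")
# A driver library (for example, pyodbc) would consume such a string;
# no connection is attempted here.
```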
Dimensions
A dimension refers to a broad grouping of descriptive data about an aspect of a business, such as software products or dates. Each dimension includes categories in one or more drill-down paths and an optional set of special categories. See also Drill-up and drill-down.
Dimension line
The dimension line shows the dimensions in the cube and the categories within each dimension currently being examined.
Drill-up and drill-down
Drill-up refers to looking at data in a progressively more general way, whereas drill-down refers to looking at data in a progressively more detailed way.
Drill-through
Drill-through is more detailed than drill-down. While drill-down stops at the lowest level of consolidated data, drill-through goes one step further by looking at the actual data records themselves. For example, if the breakdown of the types of problems resolved by a particular analyst is the lowest level of consolidation, drill-through looks at the actual records that correspond to the problem descriptions themselves.
Filter
A filter is a means of ensuring that a data search yields the most relevant results. In Tivoli Decision Support, the user can specify data selection criteria, such as date ranges or severity levels, that restrict the data search to only relevant data.
Layer
A layer is a third set of dimension categories, along with rows and columns, that you can add to the views in TDS. Layers offer details for another dimension to provide a new perspective on your views. A view can contain several layers, but you can look at only one at a time.
Measures
Measures refer to indicators you use to gauge the performance of your organization. For example, measures can be the number of problem requests received and the average time taken to resolve a problem.
Models
A model contains definitions of queries, dimensions, and measures as well as objects for one or more cubes that Cognos Transformer creates via the TDS Discovery Administrator for viewing and reporting in PowerPlay.
Profile
A profile is a feature in the Discovery Interface that enables each user to configure settings and views that pertain only to him or her. The Discovery Interface can contain several profiles.
Related view
A related view is a view that is different from the current view but may contain additional relevant data. Tivoli Decision Support automatically suggests views that are related to the current view. These additional views are listed in the Related Views tab in the hint pane.
Role
A role is a user-selected description of the user's position in the business. A user can select one or more roles based on the scope of his or her position. By specifying one or more roles, the user establishes the scope of the information contained on the topic map. The more roles that are specified, the greater the scope of the data searches displayed on the topic map.
Selection criteria
Selection criteria are the parameters specified by the user when conducting a data search. Selection criteria act as filters, ensuring that a data search yields only relevant data. See also Filter.
Slicing and dicing
Slicing and dicing refers to the process of extracting information for viewing from the cube file by selecting different dimensions. This process can be thought of as constructing a multidimensional space by using the selected dimensions as its constituent axes, or as looking at the same data from a variety of angles.
Topic
A topic is a subcategory of data in the Tivoli Decision Support topic map. Within each category of enterprise data, data is subdivided into related
topics. Within each topic, the user can choose an individual type of data for viewing. See also Category and View.
Topic map
The topic map is the user's primary means of navigating Tivoli Decision Support. In the topic map, the user can choose specific categories, topics, and views. When a view is selected, a specially configured view appears in the view pane. See also View.
View
A view is the most detailed type of question in the topic map. A view provides the user with an outlook of the data stored in the cube file or the data retrieved from a special database query.
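Slicing and dicing, as defined in the glossary above, can be illustrated with a toy cube: fixing one category on an axis and viewing the data along the remaining axes. The dimensions, categories, and the `slice_cube` helper are invented for illustration and are not the product's data model.

```python
# Sketch: slicing a tiny three-dimensional cube (year, country,
# problem type) -> call counts. All names and numbers are invented.
cube = {
    ("2023", "Brazil", "hardware"): 10,
    ("2023", "Brazil", "software"): 15,
    ("2023", "Chile", "hardware"): 5,
}

def slice_cube(cube, axis, category):
    """Keep cells whose coordinate on `axis` equals `category`,
    dropping that axis from the remaining coordinates."""
    return {
        tuple(v for i, v in enumerate(key) if i != axis): n
        for key, n in cube.items()
        if key[axis] == category
    }

# Fix the country axis at "Brazil": the slice shows year x problem type.
brazil = slice_cube(cube, 1, "Brazil")
```

Choosing a different axis or category "dices" the same data into a different two-dimensional view, which is the "variety of angles" the glossary describes.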
Because of its integration with PowerPlay and Crystal Reports, Tivoli Decision Support provides snapshot-style views of data that are displayed in the Discovery Interface. The data can be viewed in one of several formats: as text, a bar chart, a line chart, or some other graphical format. The default format depends on the type of data you are viewing, but you can select different formats for some types of views. These views allow you to:
Analyze data from different perspectives
Compare current activities to historical records
Spot trends
Troubleshoot
Evaluate resource allocation
Make projections and forecasts
Perform other management tasks
The Discovery Interface also provides features that can automate your search for data. For example, you can use bookmarks to collect your favorite views so that they are instantly available. Instead of manually browsing for data, you can use the Search tool to find information based on keywords. The
Discovery Interface's History feature tracks your most recently used views so that you can quickly return to them.
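The Search tool's keyword matching can be pictured as a simple case-insensitive filter over view titles. The view names and the `search` function below are invented for illustration; the actual Discovery Interface search is richer than this sketch.

```python
# Sketch: keyword search over view titles, in the spirit of the
# Discovery Interface's Search tool (view names are invented).
VIEWS = ["Calls by region", "Average resolution time", "Open calls by severity"]

def search(keyword, views=VIEWS):
    """Return the views whose titles contain the keyword."""
    kw = keyword.lower()
    return [v for v in views if kw in v.lower()]
```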
push-delivery feature enables all users to receive updated views but is particularly helpful to the One-minute manager.
Chapter 3. Methodology
This chapter provides the recommended methodology for successfully planning, installing, and configuring the Tivoli Decision Support product. It kicks off with an overview of the Tivoli Implementation Methodology (TIM), Tivoli's best practice for identifying, designing, planning, installing, testing, and documenting a Tivoli Enterprise solution. Following the introduction to TIM, we present a recommended Tivoli Decision Support deployment process that incorporates the structured, procedure-driven approach on which TIM is based.
refined by Tivoli Professional Services and selected Business Partners. TIM is organized according to the Software Engineering Life Cycle model. This model addresses each element that is critical to the implementation of any software development activity. In Figure 8, a schematic overview of TIM is given.
TIM provides standard, verified methods for use by project managers and Tivoli-certified consultants to execute each phase of a Tivoli implementation. With this common deployment strategy, Tivoli and Tivoli's business partners can provide: Accurate and complete requirements definitions for Tivoli solutions
Efficient requirements analysis to generate an architecture and design for a solution
Complete project planning and detailed design for a solution
Accelerated deployment of Tivoli solutions
Detailed solution verification that lends itself to customer regression test activities
Completed solution documentation that can be used by the customer, consultants, Tivoli support staff, and Tivoli development to ensure the long-term success of that solution
To find information on TIM, refer to the instructions in Appendix A, Tivoli Implementation Methodology (TIM) 3.6, on page 185.
A system architect to analyze the information and successfully create a System Architecture and Design document
The consultant, business operations, project management, sales team, and the deployment team to work together, led by business operations, to develop a technical proposal
A deployment project team to create a detailed project plan
Table 1 shows the input and output items, which highlight the components of the requirements gathering phase:
Table 1. Requirements gathering phase items
We will now take a look at the questionnaires shown in Figure 9.
3.2.1.1 Questionnaires
Questionnaires serve as the main tool for gathering a detailed description and logical picture of the customer's environment and his or her high-level goals for reporting on the IT environment. Gathering the customer-specific TDS requirements is the focal point of this exercise and will be investigated shortly. First, however, we must look at the customer's systems management requirements in the form of the Customer Requirements Questionnaire.
3.2.1.2 Initial customer requirements questionnaire
The information gathered in this report serves as the single input element of the implementation solution. Our aim is to process this information and produce other questionnaires and reports that will serve as inputs to the subsequent phases of the implementation cycle.
In this questionnaire, many business-specific questions are answered. The most important and valuable pieces of information gathered are the customer's reporting goals and issues. For example, the answers to the following types of questions will be available to us:
What are the immediate Tivoli-specific goals of the customer?
What are the long-term Tivoli-specific goals of the customer?
What are the customer's general immediate goals with TDS?
What are the customer's general long-term goals with TDS?
From this information, the TDS consultant will be able to identify the reporting solution components that are significant to the customer's business. He or she can now focus on the implementation of the product functions of Tivoli Decision Support and gather all the necessary TDS requirements.
3.2.1.3 Tivoli Decision Support requirements questionnaire
Having processed the information gathered above, we are now ready to draw up a questionnaire to extract the TDS product-specific information. The TDS provider will set up an interview with the customer, requesting that the relevant business leaders as well as the IT technical leader be present. This interview is broken up into four steps, which are shown in the list below. The purpose, requirements, and process of each step will be clearly distinguished; furthermore, a set of suggested questions for the questionnaire will be proposed.
1. Decision-making/reporting requirements overview
Purpose: It is during this step that Tivoli personnel acquire their initial information on the customer's reporting requirements. It is assumed that the customer has an amateur solution in place and intends to migrate to Tivoli Decision Support. The customer will be questioned on his or her current reporting activities, and a document detailing his or her requirements will be drawn up.
Required information: Details of what the customer expects from the Tivoli Decision Support solution; the current process, procedures, service levels, and reports; and any
product(s) associated with accomplishing the current reporting task (if any) are required information.
Process: Document all customer expectations and reporting goals. Gather all existing reporting policies and procedures implemented by the customer. Review the customer's existing reporting policies and procedures to determine how data is collected and manipulated and what Tivoli Decision Support specifics need to be presented in order to migrate to this new reporting strategy.
Suggested questions:
What are the customer's immediate report requirements?
What report requirements are forecast as future needs?
What is your current reporting strategy, and what are its shortfalls?
Are you able to publish content to the Web?
How often do you run your data mining and database interrogation procedures?
Are any Tivoli products used to gather data for the current reporting solution?
Note
It is in the analysis phase that we will investigate how these existing policies and procedures can be migrated to Tivoli Decision Support.
2. Existing Tivoli systems management products installed
Purpose: Tivoli Decision Support depends on various Tivoli products to perform its reporting task. It is assumed that the customer either has an existing Tivoli systems management solution implemented or is in the process of implementing one in their environment.
Required information: Tivoli servers, Tivoli products (including patch levels), installed Plus modules, the TMR architecture, and operating system platforms are required.
Process:
Gather all existing architecture and deployment documents (if available). Identify the process flows and systems management procedures that the business runs with. Identify whether Tivoli products that Tivoli Decision Support depends on are installed, for example, Distributed Monitoring, Enterprise Console, NetView, and Service Desk.
Suggested questions:
Which systems management disciplines does your Tivoli solution integrate with? Describe the details of that integration, including the flow of data and desktop configurations.
Are there current architecture and deployment summary documents that describe the Tivoli deployment?
If using a Master-Spoke TMR, where are the Spoke TMR servers located? (For example, in a central data center, in different geographic locations, or one per branch office.)
3. Determine hardware and operating system information
For this step, we need an overview of the existing hardware deployed at the customer site. We will use this information in the systems analysis to decide whether some of these machines can share their roles with Tivoli Decision Support.
Purpose: The purpose is to gather a hardware and operating system inventory of all dedicated systems management machines as well as machines that need to be monitored.
Required information: System-specific information is required.
Process: Review the Tivoli Decision Support hardware requirements with the customer. The processor, memory, monitor, and hard disk space of the existing hardware are some of the main issues that need to be covered.
4. Determine network-specific information
Purpose: The purpose is to gather the following information to describe the network communication mechanisms used between the various components that may be used for the TDS deployment.
Required information:
The customer should provide a network topology diagram. If the following information is not on the diagram, annotate the diagram or provide details:
Line speed of each network connection
Actual network bandwidth (if it varies by time of day, define the typical averages)
Each firewall between nodes, and each firewall's configuration, monitoring, and policies
Socks configuration description
Protocols used within the current environment
Frame Relay (Committed Information Rate (CIR) and burst rate)
Suggested questions:
Are all systems reachable via TCP/IP?
Describe the host and IP address naming conventions and scheme used to identify networking and computer system equipment.
Is the TCP/IP routing structure static or dynamic?
If DNS is used, provide a copy of the DNS map configuration. (Note that the integrity of these maps must be verified.) Describe how reverse lookups are performed.
If DHCP or WINS is in use, identify the server and describe how these utilities are configured.
The input and output items shown in Table 2 highlight the components of the systems analysis phase:
Table 2. Systems analysis phase items
3.2.2.1 Preparing for systems analysis
To create a TDS architecture, begin by translating the customer requirements and the proposals into a list of the functions and capabilities that TDS must provide. Document this information so that it can be used to implement the technical solution.
As requirements are reviewed, carefully evaluate each customer requirement to ensure that the customer has provided sufficient information for the analyst to meet it using Tivoli Decision Support. Also, if the customer accepted an action item to provide requirements information during the requirements gathering phase, ensure that the customer has supplied this information. Once these preparations are complete, focusing on the customer requirements to create the architecture and design will keep that design concise and, thus, make the deployment of the TDS solution successful.
We will begin with an analysis of the information received and then go on to explain the documentation or results that must emerge as a consequence of this analysis phase.
3.2.2.2 Tivoli Decision Support Guides
A Decision Support Guide is a TDS module that groups the enterprise data into specialized categories. Each category contains a series of topics that correspond to the different aspects of that category. Each topic contains a number of views that are associated with the data elements being examined.
The following list provides some examples of the Decision Support Guides that are available:
Tivoli Decision Support for Server Performance Prediction: Provides capacity planning, forecasting, and trend analysis and identifies server performance issues using data from Tivoli Distributed Monitoring.
Tivoli Decision Support for Event Management: Uses information from the Tivoli Enterprise Console to provide an understanding of event handling versus service level agreements.
Tivoli Decision Support for Network Event Analysis: Leverages data captured by Tivoli NetView to indicate the performance of network devices, the state of network health, and the control of network event management.
Tivoli Decision Support for Software Deployment Analysis: Helps identify issues that impact software deployment using Tivoli Software Distribution and Tivoli Inventory.
Tivoli Decision Support for Information Management: Enables customers to analyze problem and change management information stored in Tivoli Service Desk for OS/390 host databases.
3.2.2.3 Working with Tivoli Decision Support Guides
As soon as the questionnaires from the requirements gathering phase are ready, they are used during this next phase to derive a TDS reporting solution that meets the customer's requirements. A TDS analyst will study the reporting and decision information requirements from the questionnaires, and he or she will then carry out an exercise to deliver a proposal to the customer on how to meet those requirements with the use of TDS and the TDS Discovery Guides.
This exercise is a two-fold process and is outlined in the following list:
1. Mapping TDS Guides to customer requirements
The analyst needs to evaluate the various TDS Guides available. He or she will need to implement a one-to-one mapping between each required report and a TDS view made possible by one of the many TDS Guides. The analyst will need detailed information about the TDS Guides, which can be obtained from various sources:
Using Decision Support 2.0 Guides, GC32-0290
The release notes shipped with the TDS Discovery Guides
The Tivoli Web site (overview of Guide capabilities)
In most situations, TDS, with the use of its drill-down capabilities, will be able to produce much more detailed information than the customer requires. On the other hand, it is possible that some of the customer's reporting requirements will not be covered by TDS. For this reason, there should be a second step in which a solution for these unmatched reports is developed.
2. Determine customization requirements (if necessary)
For this process, the analyst compares the list of items required by the customer (or collected by the current reporting tools) to the list of items collected by the TDS Guides. Reports that are required by the customer but are not available in TDS are listed. A sub-project for delivering these reports will then need to be kicked off. This project will investigate the extent of the work required and produce a customized solution for the outstanding reports. Customization can include, but is not limited to:
Editing current Guides
Writing custom scripts to populate the DM database
Creating custom Crystal reports
Creating custom PowerPlay reports
For detailed information, refer to Using Decision Support 2.0 Guides, GC32-0290, which is shipped with the product.
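The comparison at the heart of step 2 is essentially a set difference between the reports the customer requires and the views the selected Guides provide. The sketch below illustrates this; the report and view names are invented.

```python
# Sketch: comparing customer-required reports against guide-provided
# views; leftovers feed the customization sub-project (names invented).
required = {"Calls by region", "SLA breaches", "Custom billing summary"}
guide_views = {"Calls by region", "SLA breaches", "Average resolution time"}

# Reports a guide already covers, and reports needing custom work.
matched = required & guide_views
needs_customization = required - guide_views
```

Only the `needs_customization` set is scoped into the sub-project, which keeps custom Crystal or PowerPlay report development to the minimum actually required.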
3.2.2.4 System architecture and design document
The objectives of this process are to design and document the physical and logical aspects of the customer's TDS deployment.
The following tasks need to be completed by this activity: Design a high-level architecture for TDS layout (Physical Design). Define logical architecture for the TDS implementation (Logical Design).
Identify and acquire the servers required for implementing Decision Support (resource requirements). Document the system architecture and design (output documentation).
The TDS architecture: Chapter 4, TDS architecture and design considerations, on page 59, is dedicated to designing solutions and focuses a great deal on various architecture considerations. For this reason, we will not go into too much detail here on the TDS physical design. This section introduces some TDS architecture concerns and presents a standard design on which the customer's solution should be based. The foundation of the physical design depends on the customer requirements and the answers to the questionnaires presented in Section 3.2.1, Requirements gathering phase, on page 25. The analysts will need to study the customer's environment and business with the intention of identifying suitable TDS hardware components and personnel resources. They will then need to bring these two resources together and apply a process to deliver a suitable architecture. Insights into some of the thought processes that will come into play are listed below:
Does my customer require a stand-alone or a network installation?
How many TDS servers does my customer need?
How many administrator machines will be required?
How many client interfaces will be required?
Does the customer require Crystal Reports to be installed?
Below is a recommended network configuration diagram. The figure depicts a typical TDS implementation in network mode. In network mode, only Decision Support's client component and PowerPlay are installed on the client machine. The other components are installed elsewhere. The user of the client system only has access to the Discovery Interface. You must administer Decision Support for all clients from the system administrator's system.
[Figure: TDS in network mode — client systems running the Discovery Interface, a file server running the server components (including guides, cubes, and models), and ODBC connections to the Inventory, DM, and TEC databases]
As mentioned before, Chapter 4, TDS architecture and design considerations, on page 59, goes into the details of the physical and logical design considerations of a TDS implementation.
Resource availability
Early in the process of developing a system architecture and design, a rough determination is made as to what hardware might be required for deployment. It is important to generate this information as soon as possible so that hardware can be procured and made available as soon as hands-on deployment work begins. The system requirements for Tivoli Decision Support may vary greatly and depend on many environmental factors. To simplify this exercise, we will assume that a networked topology of clients fed by a file server is the standard workgroup configuration. With that in mind, cube build times and view run times will vary based on the following:
Number of data points included in the scope of the analysis
Performance of the database server
Demand on the database server
Throughput of the network
Performance of the client
Performance of the file server
Performance of the processor that builds the cubes
The Tivoli Decision Support Installation Guide, GC32-0289, lists the operating system and suggested hardware requirements for each Decision Support component. It differentiates between two classes of machines, a low-end and a high-end machine, for each component. Although this serves as a good base, it is often not easy to rank the type of environment that you are in, which makes the choice difficult. Table 3 details the minimum configuration for various business environments. The sizings are based on Tivoli's experience with the Service Desk product line. This table gives you an easier way of deciding on the configuration that you require. The business environments are divided into four ranks: Small, Medium, Large, and Mega. The variable that we are interested in is the number of contacts, which gives an idea of how large the business is.
Table 3. Minimum configuration table
Small (<10K contacts, <50K calls/yr)
  Client PC: 40 MB disk space, 100 MHz Pentium, 32 MB RAM, NT 4.0/Win 95
  Administrator PC: 500 MB disk space, 200 MHz Pentium, 64 MB RAM, NT 4.0/Win 95

Medium (10-30K contacts, 50-100K calls/yr)
  Client PC: 40 MB disk space, 100 MHz Pentium, 48 MB RAM, NT 4.0/Win 95
  Administrator PC: 500 MB disk space, 250 MHz Pentium, 128 MB RAM, NT 4.0

Large (30-250K contacts, 100-500K calls/yr)
  Client PC: 40 MB disk space, 100 MHz Pentium, 64 MB RAM, NT 4.0/Win 95
  Administrator PC: 700 MB disk space, 300 MHz Pentium, 256 MB RAM, NT 4.0

Mega (>250K contacts, >500K calls/yr)
  Client PC: 40 MB disk space, 100 MHz Pentium, 64 MB RAM, NT 4.0/Win 95
  Administrator PC: 1 GB disk space, 300 MHz Dual Pentium, 512 MB RAM, NT 4.0
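The contact thresholds in Table 3 can be encoded in a small helper for quick sizing checks. This is purely a back-of-envelope sketch of the table above; the function name and tier handling are our own illustration, not part of the TDS product:

```python
# Sizing tiers from Table 3, keyed by the upper bound on yearly contacts.
# Illustrative only -- it simply restates the table's thresholds.
TIERS = [
    ("Small", 10_000),    # <10K contacts, <50K calls/yr
    ("Medium", 30_000),   # 10-30K contacts, 50-100K calls/yr
    ("Large", 250_000),   # 30-250K contacts, 100-500K calls/yr
]

def select_tier(contacts):
    """Return the Table 3 configuration tier for a given contact count."""
    for name, upper in TIERS:
        if contacts < upper:
            return name
    return "Mega"         # >250K contacts, >500K calls/yr
```

For example, a center handling roughly 100,000 contacts falls into the Large tier, which calls for a 700 MB, 300 MHz, 256 MB Administrator PC.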
The best way to gather the TDS resource mapping information is through a survey sheet that should be filled in by the customer with help from the systems analysts. The survey should be structured in such a way that
emphasis is placed on every machine identified as playing a specific role in the TDS architecture. What follows is a description of the machine roles and the accompanying survey sheets.

File server information
This is the repository for TDS; it contains the TDS models, templates, queries, and other information required to generate views for the Discovery Interface. The server must be reasonably fast at serving files, and, because every client machine will connect to it, the network connection must also be reasonably fast. These factors need to be investigated at this point. Figure 12 shows an example file server information form:
Network protocol used between server and workstations (Novell, NT, etc.): _______________________________________________________
Is this file server dedicated to TDS file storage? ______
Amount of disk space available for TDS files? ___ (300+ MB recommended)
Do all TDS client and admin machines have READ and WRITE access to the TDS file service/directory? ______
Figure 12. File server information example form
TDS Administrator PC information This component provides functions for TDS configuration and administration, for example, setting system parameters that control the behavior of TDS. This machine requires a fast hard drive and fast network access to the database server. Figure 13 on page 38 shows an example TDS administrator PC information form.
Example Form
Is this PC dedicated to Tivoli Decision Support or is it used for other applications? If it is used for other applications, what are they? Machine Type: ________________________ Operating System: _____________________ Version: ___________
Has a drive letter to the server components been mapped on this machine? If yes, what is the drive letter? ______________
Figure 13. TDS administrator PC information example form
TDS Client PC information ODBC connection to the database server, sufficient disk space, and shared access to the file server are some of the main criteria that need to be looked at in this step. Figure 14 shows an example TDS client PC information form:
Example Form
Machine Type: ___________________________________________ Operating System: _______________________ Version: _______
Free disk space: _________________________ RAM: __________ Number of client workstations for installation: __________________ Number of classroom workstations for installation: ______________
Figure 14. TDS client PC information example form
Database server information
Identify the machine that will host the configuration repositories and the type of RDBMS that will retain them. The following information is required to set up Tivoli's RDBMS Interface Module:
- Database vendor
- Database ID
- Database home directory
- Database server ID
- User name
- Instance home directory (DB2 only)

For database management purposes, identify what the customer wants to do with the data collected in the configuration repositories; this information will assist in engineering the database by determining the following:
- Structure of the database
- Size of the database
- Queries required by users
- Customized database tasks, scripts, and reports
- Database clean-up requirements (when and how often)

Figure 15 shows an example database server information form:
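The survey fields above map naturally onto an ODBC connection string. The sketch below assembles one; the driver names in `ASSUMED_DRIVERS` are placeholders that depend on the 32-bit ODBC driver actually installed at the site, and the DB2 instance home directory is assumed to be configured separately in the environment:

```python
# Hypothetical driver names -- check the ODBC Data Source Administrator
# on the Administrator PC for the names actually registered there.
ASSUMED_DRIVERS = {
    "oracle": "Oracle ODBC Driver",
    "db2": "IBM DB2 ODBC Driver",
    "sybase": "Sybase System 11",
}

def odbc_connect_string(vendor, server_id, database_id, user):
    """Build a generic ODBC connection string from the survey answers."""
    driver = ASSUMED_DRIVERS[vendor.lower()]
    return (f"DRIVER={{{driver}}};SERVER={server_id};"
            f"DATABASE={database_id};UID={user}")
```

The `DRIVER`, `SERVER`, `DATABASE`, and `UID` keywords are standard ODBC connection-string attributes; exact keyword support varies by driver, so treat this as a starting point for the survey conversation rather than a working configuration.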
Example Form
Is this a dedicated database server or is it used for other applications? If it is used for other applications, what are they? ___________________________________________________________ RDBMS: _____________________________ Version: ______________ Is your database backup hot or cold?__________________________ Day(s) and time(s) of database backup __________________________
Figure 15. Database server information example form
Note
In the example form shown in Figure 15, "hot" refers to a backup where the database remains in use while the data is backed up. "Cold" refers to logging off all users, stopping all database activity, and then backing up the data. A cube build will fail during a cold backup of the data source or if a Tivoli Decision Support multidimensional view is open. Cube builds should therefore be done during business off-hours.

Network information
Since TDS comprises three directly-related components functioning on different machines on the network, it is important that all network shortfalls be identified. Figure 16 on page 40 shows an example network information form.
Example Form
Network protocol used between Database Server and Application Server, Workstations and Servers (i.e. TCP/IP, IPX-SPX, etc.): ________________ Do you use network Login scripts to set application path statements?
Figure 16. Network information example form
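The note above points out that a cube build fails during a cold backup of the data source, so the backup times captured in the database survey should be checked against the planned build schedule. A minimal sketch, with times expressed as minutes since midnight; the real schedule is configured in the Tivoli Discovery Scheduler, not in code like this:

```python
def _window(start, end):
    """Minutes covered by a time window, handling wrap past midnight."""
    if start <= end:
        return set(range(start, end))
    return set(range(start, 24 * 60)) | set(range(0, end))

def build_clears_backup(build_start, build_end, backup_start, backup_end):
    """True if the cube-build window never overlaps the cold-backup window."""
    return not (_window(build_start, build_end) &
                _window(backup_start, backup_end))
```

A schedule that passes this check can still collide with ad hoc backups, so the day-and-time answers from Figure 15 should be revisited whenever the backup plan changes.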
3.2.3.1 Project analysis At this point in the process, the sales team and the deployment team have established a strong working relationship with the customer and have completed a number of tasks.
Project analysis involves reviewing the results of these efforts. Data available for review includes:
- Technical Proposal
- Cost Proposal
- Statement of Work
- Results of all requirements gathering activities
- System Architecture and Design Document

For the details of the review and verification process, refer to the Tivoli Implementation Methodology. To find information on how to access TIM, refer to the instructions outlined in Appendix A, Tivoli Implementation Methodology (TIM) 3.6 on page 185.
3.2.3.2 The TDS project plan After the analysis is complete, the project manager and the consultant services team consolidate the results of this activity and develop a detailed project plan for the customer engagement.
The project plan is a collection of the following information:
- TDS Solution Objectives
- Project task plan
- Project team phasing plan
- Project change control plan
- Risk assessment and mitigation plan
- TDS training plan
- Status reporting plan

The deliverables in the preceding list represent a high-level view of what is expected of this TDS project plan. The details of each deliverable are clearly explained in TIM and should be referenced directly from TIM. It is, however, important that we discuss some of the Decision Support details surrounding some of these core deliverables, namely, the solution objectives, project task plan, staff phasing plan, and training.

Decision Support solution objectives
A plainly-written document called the Solution Objectives is written by the project manager and is used to establish the scope of the project. The TDS Solution Objectives document summarizes the results of the initial project review documented in the project analysis and is created to assist with project task plan development. Elements of the TDS Solution Objectives include the following:
- The TDS solution to be developed and the estimated completion date
- The customer's business and information technology goals for the project
- The number of hours allocated to the consultant team to deliver this product
- A description of the functionality and limitations of the solution
- High-level tasks required to develop the TDS solution
- Assignment of high-level tasks to the customer or consultant organization
- Completion criteria established for each high-level task and for the project
- Key assumptions and risks associated with the project

Decision Support project task plan
The project manager and the services consultant team consolidate the results of the analysis and develop a project task plan for the customer engagement. The goal of the project task plan is to identify all project tasks, the duration of each task, and the individual responsible for performing each task. The project task plan also presents this data graphically. This plan is used to set customer expectations for consultant services engagements and to track progress and control the scope of these engagements.
In line with the above-mentioned goal, the project task plan should do the following:
- Identify the TDS version and patches to be deployed
- List all prerequisite tasks that must be performed in preparation for each deployment task
- Define the schedule for the customer to provide consultant services facility access and office materials at the customer location
- Require that all outstanding requirements gathering or system analysis activities are complete or that tasks are documented in the project task plan to perform this work
- Define the TDS environment configuration and administrative tasks
- Define the date by which the required hardware should be acquired
- Detail the configuration process of TDS hardware
- Identify work to be completed for each task in the project task plan
- Address each reporting requirement identified by the customer
- Identify each task required to implement the TDS architecture
- Identify all configuration and customization tasks not defined by the TDS architecture
- Identify each task necessary to perform system testing
- Identify each task necessary to generate deployment documentation
- Identify all management and administrative tasks essential to the success of the project
- Assign each task to an individual
- Specify the duration of each task

For your reference, Figure 18 on page 44 shows a snapshot of a project task plan.
3.2.3.3 Project team phasing plan The formation of the Implementation Project Team is critical to the success of any project. The roles of the essential team members required for Tivoli Decision Support implementation projects are detailed in the following sections. Keep in mind that one team member may fulfill multiple roles during the implementation project.
Implementation project leader The Implementation project leader organizes the efforts of the team. His or her responsibilities include project management, direction setting, resource scheduling, and project acceptance. It is crucial that the customer provide an Implementation project leader. This role is typically filled by someone with authority to sign off and accept completion of contracted work. The project leader must be available to verify and accept any work completed by the
implementation consultants. This person will be required to sign the acceptance document as the various project phases are completed.

System administrator
The responsibilities of the Tivoli Decision Support system administrator include long-term model administration, parameter administration, configuration changes, usage policies, and so on. If the reports analyst position detailed below is not filled, the system administrator assumes the reports analyst's responsibilities.

Management team
The management team provides the sponsorship necessary to successfully implement the products within the business units of the company. Their responsibilities will be to aid the team in marketing the application to other business areas, to provide the resources and funding needed for the implementation, and to remove organizational roadblocks. Participation is heaviest during the planning phase of the implementation but continues throughout the implementation process.

Tivoli product system administrators
Due to the integral nature of Tivoli's products, it may be important for the Tivoli system administrators (for those products for which TDS Discovery Guides have been purchased) to be on the Tivoli Decision Support deployment team. Responsibilities include describing any customizations to the product databases and determining the overall management objectives for the Tivoli Decision Support implementation. Participation is highest during planning and the first phase of implementation.

Network administrator
Someone familiar with the corporate network configuration is the best choice for this role. Responsibilities will be to aid the team in technology decisions, in the installation of software, and in the setup of user permissions to the directory structures of the applications. Participation is heaviest during the planning phase and first phase of the implementation.

Database administrator
The database administrator's participation is critical to the success of the implementation.
Responsibilities will include team participation in technology decisions and in optimization of the database prior to deployment. Participation is heaviest during application setup, but will be ongoing as DBA issues arise.
Web administrator
If your organization would like to take advantage of Tivoli Decision Support's ability to push views to a Web server, you will need a Web administrator. He or she will be responsible for administering the Web servers in your enterprise and will make sure that all users have access to the reports stored on them.

Reports analyst (optional)
The best choice to fill this role is someone who has experience with a structured programming language, with software development, and with Crystal Reports. Responsibilities include completing cube customizations as defined during the implementation. (This role can be filled by a customer resource, if available, or by a Tivoli Systems resource.)

Trainer (optional)
The TDS trainer will be responsible for attending all TDS workshops and for training TDS administrators and users after the deployment.
3.2.3.4 Decision Support training plan During the services engagement, customer personnel working with Tivoli certified consultants will gain some informal knowledge of the Tivoli Decision Support software.
Additional formal training, however, is necessary for all customer personnel involved with the development, implementation, testing, transition, administration, or operation of the TDS solution. The consultant project manager works with the customer project manager to identify the training needed by each customer staff member and helps develop a training plan based on the individual's responsibilities, the necessary skills, and available training courses. At the time this redbook was published, Tivoli had developed a series of workshops to deliver this necessary training. The training covers the entire deployment process, from installation and configuration to the use of the Discovery Interface. Table 5 highlights the content and value of these workshops:
Table 5. TDS workshop summary
Application setup, configuration, and administration workshop: Installation and configuration of the TDS components; TDS Administration options, cube builds, and build schedules.

Advanced administration workshop: TDS usability, configuring profiles, publishing views, and adding components to the topic map.

Customization workshop: Modifying calculations, adding flex fields, and creating new cube and report templates.

End-user workshop: Use of the Discovery Interface, including drill-through and adding dimensions.
Note
Formal Tivoli training is offered by the Tivoli Worldwide Education department. Information about Tivoli Customer Education is available at
http://www.tivoli.com/services/education/
Deployment Phase
Input:
- Tivoli deployment guides
- Initial customer requirements questionnaire
- Tivoli Decision Support requirements questionnaires
- Technical proposal
- System Architecture and Design Document
- Detailed project task plan

Output:
- Tivoli TDS solution, ready for testing
For most Tivoli product deployments, Tivoli management software deployment guides are provided to the technical consultant to assist with all installation, configuration, customization, automation, and maintenance tasks. These guides contain a wealth of information, providing in-line tips and hints and Web-based links to online documentation, technical papers, and useful Web sites. The important topics of a TDS deployment guide are shown in Table 7:
Table 7. TDS deployment guide
TDS Information (product version, shipped manuals): Available on TDS CD; http://www.support.tivoli.com/Prodman/html/AB.html

Release Notes: http://www.support.tivoli.com/Prodman/html/RN.html

Patches (installation patch number and patch references): http://www.support.tivoli.com/patches/; ftp://ftp.tivoli.com/support/

TDS Information (Tivoli external Web URL pointers): http://www.tivoli.com/products/index/decision_support/

White Papers: http://www.tivoli.com/products/documents/whitepapers/

Managed View magazine (Managed View articles): http://www.wellesleyinfo.com/tmv/

Press Release: http://www.tivoli.com/teamtivoli/press/
Discovery Administrator: Refer to the TDS Administrator Guide TOC; refer to the TDS User Guide TOC

Maintenance: Refer to the TDS Installation Guide TOC; refer to the TDS Administrator Guide TOC
The phases of the Deployment Segment include:
- Preparing for deployment
- Installation and customization
- Configuration
- Advanced configuration and customization
- Training
3.2.4.1 Preparing for deployment A few important steps need to be performed before deployment can commence.
1. Hardware prerequisites
Prior to deployment, the customer should have all the necessary hardware in position and ready to be rolled out with the TDS applications. The consultant services team should have information on all the machine access passwords, with appropriate permissions to make configuration changes.

2. Software prerequisites
It is necessary to verify that all prerequisite software installations and network configurations have been completed prior to the deployment of TDS. Some of the items to be checked include:
- 32-bit database client software is installed on every TDS client and the TDS Administrator PC.
- 32-bit SQL database connectivity has been verified on each TDS client and the TDS Administrator PC.
- The shared source path (the file service where the cubes, reports, models, and administrative databases will reside) has been mapped to the TDS Administrator and client PCs.
- Network connectivity between the TDS Administrator PC, client PCs, and the shared source path has been successfully tested.
- The TDS Administrator PC and TDS client PCs have READ and WRITE access to the shared source file server.
- In order to optimize SQL processing time, database maintenance and backup have been performed recently, indices have been rebuilt, inactive records purged, and appropriate data archived.

3. Gathering documentation
The deployment team should have all the necessary documentation on hand. This includes, but is not limited to, the input elements identified below:
- Tivoli deployment guides
- Initial customer requirements questionnaire
- Tivoli Decision Support requirements questionnaires
- Technical Proposal
- System architecture and design document
- Detailed project task plan
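Some of the checklist items above, such as READ and WRITE access to the shared source path from each Administrator and client PC, lend themselves to a quick scripted check. A hedged sketch (the function and its messages are ours; run it against the mapped drive letter for the shared source path on each machine):

```python
import os
import tempfile

def check_shared_source(path):
    """Return a list of problems with the shared source path; an empty
    list means the basic READ/WRITE prerequisites pass on this machine."""
    if not os.path.isdir(path):
        return ["shared source path %s is not reachable" % path]
    problems = []
    if not os.access(path, os.R_OK):
        problems.append("no READ access")
    try:
        # Verify WRITE access by creating and removing a scratch file.
        fd, name = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(name)
    except OSError:
        problems.append("no WRITE access")
    return problems
```

This covers only the file-service items; ODBC connectivity still needs to be verified separately with the database client tools on each PC.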
3.2.4.2 Product installation This section describes the method for an installation in Network Mode. The following tasks are accomplished in this phase:
Tivoli Decision Support file server example macro-tasks:
- Installation of the Tivoli Decision Support server components on the shared source file server
- Installation of the TDS Discovery Guides required to meet customer requirements as established in the systems analysis

Tivoli Decision Support administrator example macro-tasks:
Installation of the following components on the TDS Administrator PC:
- Tivoli Decision Support Administrator components
- Cognos Administrator components: Transformer, on-line books, and help files
- 32-bit Crystal Reports 6.0
- 32-bit ODBC driver (if not already installed)

Tivoli Decision Support client example macro-tasks:
Installation of the following Tivoli Decision Support client components:
- Tivoli Decision Support Client
- Cognos Standard: PowerPlay, on-line books, and help
- 32-bit ODBC driver (if not already installed)
3.2.4.3 Product configuration The Discovery Administrator needs some configuration in order to work with the chosen guides and configured data source. The Discovery client needs to be configured for each user to enable their job-specific data and views to be available to them. The following tasks are accomplished in this phase:
TDS administrator:
- Importing the required guides
- Adding the respective data sources for the cubes associated with the guides that have been imported
- Configuring TDS Administrator options, including cube parameters and date ranges, as well as other guide-specific parameters for the cubes
- Reviewing the cube build schedule and using the Tivoli Discovery Scheduler to automatically build the cubes after business hours, as specified in the design document

TDS client:
- Selecting the guides that the user will require
- Setting up the appropriate user roles
3.2.4.4 Advanced configuration and customization The customer requirements not fulfilled by TDS now need to be integrated into the solution. This is a technically-intensive stage during which the following customization tasks are accomplished:
- TDS calculations are modified.
- Flex fields are added as dimensions to cubes.
- Flex fields are added as parameters to Crystal reports.
- Terminology in PowerPlay or Crystal reports is changed.
- Existing reports are integrated.
- New Crystal and PowerPlay reports are created.
3.2.4.5 Training Customer personnel involved with the development, implementation, testing, transition, administration, or operation of the TDS solution require training based on the individual's respective responsibilities.
For details on the type of training available, refer to the workshops identified in Section 3.2.3.4, Decision Support training plan on page 46.
Testing Phase
Input:
- Tivoli TDS solution, ready for testing
- System architecture and design document
- Detailed project task plan
- System test plan

Output:
- Verified TDS deployment
3.2.5.1 Preparing to test The customer should be involved with the testing of the TDS solution. This enables and encourages the customer to take an active role in the
deployment, thus becoming familiar with the Tivoli solution and more easily assuming ownership of it. Time and resources for testing the TDS solution should have been allocated in the project task plan throughout the deployment.
3.2.5.2 Testing the solution Each phase of the deployment should be followed by a functionality testing exercise. It is required that the following set of test cases be applied in order to verify the TDS solution:
- Integration testing
- System testing
- Production testing

Each of these test case types identifies a unique level of testing to be performed to verify the solution, as described below.
Note
Every product installation and configuration entry in the project task plan must be followed by a functionality test in this segment of the deployment cycle.

Integration test
These test cases verify the connections between the functional elements of the TDS solution. The following items are tested:
- Network communication between all Decision Support components, including the RDBMS server
- The 32-bit ODBC connection between the Decision Support Administrator machine and the RDBMS server
- The shared source path has been mapped to the TDS Administrator and client PCs
- The RDBMS server is up and running and collecting data from the respective data sources
- The TDS server has been installed and configured correctly
- All the Discovery Clients have been installed and configured correctly
- All the Discovery Administrators have been installed and configured correctly
System test
This suite of test cases is used to verify that each element of the TDS solution executes properly and that all of the solution elements function properly in relation to one another. System testing verifies that the solution accommodates the following defined requirements:
- Installation and importation of the Decision Support Guides
- Data source connections must be tested with the database
- Manual population of the cubes (.mdc files) with representative sample data to verify that each cube builds properly and to validate data
- Monitoring of the time it takes to build the cube, to identify any network bandwidth issues
- Use of the Discovery Interface client machines to test successful access to the cube and report directories on the server

Production test
Production testing is the first time that the TDS solution is subjected to the production environment or a production-like environment. The intent of production testing is to verify that the solution performs properly, and with acceptable performance, under the load of the operational environment. Production testing can be conducted by mirroring events from the production system so that production-level testing and normal production system activities occur concurrently. Alternatively, a period of time may be specified during which the TDS solution utilizes the production system environment, with concurrence from the customer that the solution is in test mode and that the results of the solution are not guaranteed. At this time, the following items require attention:
- Manual cube builds are completed successfully and in a suitable time period.
- The Task Server is running and scheduling the builds on time without running into any errors.
- All the administrator and client operators display a good knowledge of the products that they handle.
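The production-test item about cube builds completing on schedule can be spot-checked by looking at the modification times of the .mdc cube files on the file server. A sketch under the assumption that every cube is rebuilt nightly; the directory layout, function name, and threshold are all illustrative:

```python
import os
import time

def stale_cubes(cube_dir, max_age_hours=24.0):
    """List .mdc cube files older than max_age_hours -- a quick way to
    spot cubes whose scheduled builds are silently not running."""
    cutoff = time.time() - max_age_hours * 3600
    return [name for name in sorted(os.listdir(cube_dir))
            if name.lower().endswith(".mdc")
            and os.path.getmtime(os.path.join(cube_dir, name)) < cutoff]
```

A non-empty result is a prompt to check the Task Server and the Discovery Scheduler logs rather than proof of a failed build, since some guides may legitimately build less often.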
Each machine, user, group, and application used by TDS is identified. This document is the complete reference available to the customer after a services engagement and enables the customer to independently maintain and operate the TDS solution. The Project Deployment Summary also serves as a reference document for a services organization to perform future enhancements to the customer's TDS solution. The input and output items shown in Table 9 highlight the components of the documentation phase:
Table 9. Documentation phase items
Documentation Phase

Input:
- System architecture and design document
- Detailed project task plan

Output:
- Project deployment summary
- A complete and well-documented TDS deployment

Figure 21 outlines the process flow of the Documentation phase:
3.2.6.1 Preparing for documenting the deployment During the Detailed Project Planning segment and throughout the Deployment segment, elements of the Tivoli solution were documented. In support of this, the technical consultant in charge of the project plan has done the following:
- Allocated a consultant with an additional project librarian role
- Verified and retained all configuration, customization, solution administration, and solution maintenance procedures
3.2.6.2 Project deployment summary This deployment summary is necessary to document all installation, configuration, customization, automation, and maintenance scripts and programs developed for the customer. Each machine, user, group, and application managed by Tivoli Decision Support is identified, as well as the versions of each product installed. Finally, the deployment summary documents the administration and maintenance tasks necessary to operate the TDS solution. Properly controlled and updated, the document will provide lasting benefit to the TDS solution developed for the customer. The document is comprised of the following topics:
System overview
This section introduces the TDS solution deployed at the customer's site. Using information already created in the System Architecture and Design document, a high-level overview of the customer's TDS solution is presented.

Technical overview
The technical overview section of the deployment document provides detailed configuration and customization information for:
- The architecture of the solution
- Automation and maintenance scripts and programs developed for the customer
- Administration and maintenance procedures

This section of the deployment document is the most valuable to the customer. It documents each Tivoli, system, network, application, and database administrator task necessary to operate and maintain the customer's TDS solution.

Problem determination
This is the section of the deployment guide that provides problem determination and analysis information to the customer. Items addressed in this section include the following:
- Log and trace file information
- Performance monitoring tools
- Daemon status information
- Developed script and program error code conditions
- Fail-over analysis
- Security permission failures
- Database failures
- Communications failures

Discussion of implementation
This is a brief project management discussion highlighting the methodologies used during deployment. This section of the deployment document includes the project team's recommendations for future enhancements of the customer's TDS solution.
4.1 Overview
Section 4.2, Integrating Decision Support with Tivoli Enterprise applications on page 60, illustrates how and where Tivoli Decision Support fits into a Tivoli enterprise environment. Section 4.3, Tivoli Decision Support components on page 61, describes the various Decision Support components and the software of which they are composed. Section 4.4, Integrating Tivoli Decision Support components on page 63, illustrates how the components fit together, what their roles are, and what their resource requirements are. Section 4.5, Stand-alone vs. network architecture on page 70, compares the Tivoli Decision Support installation modes, Stand-alone mode and Distributed Network mode; it explains the differences and weighs the advantages and disadvantages of these two installation options. Section 4.6, Suggested architecture and design solutions on page 72, presents a set of suggested deployment solutions for common Tivoli environments. Section 4.7, Troubleshooting tips on page 81, touches on some of the common problems experienced with Tivoli Decision Support.
Figure 22. Tivoli Decision Support functionality diagram
The diagram shows, at a high level, the integration between Decision Support and a typical Tivoli Enterprise environment.
The Decision Support process flow is described in the numbered steps below:
1. At the bottom of the diagram, we have Tivoli Management Agents (endpoints) running Tivoli products, such as Distributed Monitoring or Inventory.
2. Management gateways consolidate the data and forward it to the Tivoli management server.
3. Tivoli applications, such as Software Distribution, Enterprise Console, or Distributed Monitoring, continually update their databases or tables with data received from all the Tivoli Management Agents.
4. Tivoli Decision Support accesses this data by means of an ODBC connection.
5. The data is interrogated by a set of algorithms defined by the Tivoli Decision Support Discovery Guides that are available in the TDS environment.
6. The Tivoli Discovery Administrator then transforms this processed data into multidimensional cubes consisting of business-relevant information.
7. This information is then viewed through the Decision Support Discovery Interface, where the decision maker can perform drill-down analysis.
Ed.mdb
This file contains all the topic map data. It includes which views are in which categories and topics, and the file name for each view. It also contains the related views, view hint descriptions, and keywords for searches.

DrillThru.mdb
This file is created on the fly by the TDS Administrator to cache the data used to create the cube. This data is then read during a drill-through operation issued from the Discovery Interface.

TDS Discovery Administrator
This specialized module enables you to administer Decision Support and to build and customize cubes. It also assists in the configuration of Decision Support. On the Discovery Administrator machine, you must install not only the Tivoli Discovery Administrator component, but also the database client component, a 32-bit ODBC driver, and Cognos PowerPlay in Administrator mode. The database client and ODBC connection are required to access the database during the cube-building process. The kind of database client and ODBC driver depends on the specific TDS Discovery Guide requirements; always refer to the TDS Discovery Guide release notes for specific details. The Crystal Reports runtime module is installed automatically when the Discovery Administrator component is installed. Note that the full installation of Crystal Reports is optional and is required only if the reports need to be changed.

TDS Client Component
This is the interface to TDS and provides all the tools needed to open and work with views of the data from your organization's enterprise databases. On this component, you must install not only the TDS Discovery Interface product, but also Cognos PowerPlay in Standard mode, the database client, and a 32-bit ODBC driver. The database client and ODBC connection are required only to access Crystal Reports format data from the repository; whether they are needed depends on whether the topic structure of your respective TDS Guides contains data that uses the Crystal Reports templates.
[Figure 23: the Tivoli Decision Support components — the Inventory, DM, and TEC databases reached over an ODBC connection, and cube building over a network connection to the TDS File server]
Figure 23 portrays a high-level view of the components that make up Tivoli Decision Support and the integration between them. The Tivoli Discovery Administrator module, which is installed on the system administrator's machine, connects to your enterprise's databases through an ODBC data source connection and to the TDS File server through a network connection. These connections are used to perform the cube-building process. To provide the Crystal Reports reports, the Tivoli Discovery Interface connects to the enterprise's databases through an ODBC data source connection. A network connection to the TDS File server is used to get information from the cubes in order to build multidimensional reports.
When you issue a request for information from the Tivoli Discovery Interface, Tivoli Decision Support either reads the information from the database directly, for Crystal Reports reports, or reads the cube file previously created by the Tivoli Discovery Administrator, for PowerPlay reports. The information is then returned to the Discovery Interface and presented to the user in a graphical format. The process described above is explained in detail in the subsequent sections of this chapter.
[Figure 24: step one — SQL queries executed against the Inventory, DM, and TEC databases]
The most bandwidth-intensive task is the cube-building process, which is executed by the Discovery Administrator. Figure 24 shows step one. A predefined set of SQL queries is executed on the database from this machine. All the queries are stored in the Cubes.mdb file. These queries perform an exhaustive interrogation of the database and may take a long time to complete.
In step two, as shown in Figure 25 on page 66, the raw data is gathered from the database through the SQL queries. The Discovery Administrator then creates the calculated columns and stores all the data on the TDS File server in the $TDS\data\export directory.
[Figure 25: step two — raw data gathered from the Inventory, DM, and TEC databases and processed data written to the TDS File server]
If the query has been designated to export to Drill-through databases, the Discovery Administrator also writes the processed data to a table of the same name as the query in the DrillThru.mdb file stored in the $TDS\data directory on the TDS File server. This table will be used during a drill-through operation issued from the Discovery Interface.
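The drill-through cache can be sketched with sqlite3 standing in for the Access DrillThru.mdb file; the table layout and column names are assumptions:

```python
import sqlite3

def cache_for_drillthrough(conn, query_name, columns, rows):
    """Write processed rows to a table named after the query, the
    way the Administrator caches data in DrillThru.mdb so a later
    drill-through from the Discovery Interface can read it back.
    """
    conn.execute('CREATE TABLE IF NOT EXISTS "%s" (%s)'
                 % (query_name, ", ".join(columns)))
    placeholders = ", ".join("?" for _ in columns)
    conn.executemany('INSERT INTO "%s" VALUES (%s)'
                     % (query_name, placeholders), rows)
    conn.commit()

def drill_through(conn, query_name):
    """Read the cached rows back, as a drill-through operation would."""
    return conn.execute('SELECT * FROM "%s"' % query_name).fetchall()
```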
[Figure 26: step three — model processing of the raw data through a Transformer file into a processed cube]
Figure 26 on page 66 shows step three. The Discovery Administrator first retrieves the information from the model files and the data stored in the delimited text files. Second, the information is stored in a Cognos Transformer file format. The Discovery Administrator then runs Cognos Transformer and processes the raw data, packing it into a cube. The cube file is now ready to be transferred to the TDS File server machine. The Discovery Administrator then writes the cube file back to the TDS File server in the $TDS\cubes\temp directory.
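What "packing the data into a cube" amounts to can be illustrated with a toy multidimensional rollup. This is a simplification of the idea, not Cognos Transformer's actual algorithm, and the record fields are invented:

```python
from collections import defaultdict
from itertools import combinations

def build_cube(records, dimensions, measure):
    """Aggregate a measure over every combination of dimension
    values, so that any drill-down total can later be answered by a
    single lookup instead of rescanning the raw data.
    """
    cube = defaultdict(float)
    for rec in records:
        for r in range(len(dimensions) + 1):
            for dims in combinations(dimensions, r):
                key = tuple((d, rec[d]) for d in dims)
                cube[key] += rec[measure]
    return dict(cube)
```

The empty key holds the grand total; partial keys hold the subtotals a user reaches by drilling down.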
[Figure 27: step four — the Rover program copies the completed cube into place]
As shown in Figure 27, in the fourth step the Discovery Administrator runs a program called Rover to copy the completed cube from the $TDS\Cubes\Temp directory up to the $TDS\Cubes directory on the TDS File server.
After studying this process, it is evident that quite a bit of network traffic is generated between the Discovery Administrator, the TDS File server, and the database repository. It is, therefore, suggested that the Discovery Administrator machine be installed on the same physical network and in the same location as the other two servers. This will result in faster cube build times and also prevent the process from failing due to packets timing out.
There can be multiple Discovery Administrator machines, each building cubes for a separate Tivoli Decision Support Discovery Guide. All of them can point to the server components on the same network TDS File server. A distributed Discovery Administrator environment is suggested when there is a need to speed up the cube-building process. Another reason can be administrative: when various people are involved with the available guides and there is a need to localize the Discovery Administrator console for specific users.
Note
When planning to have two or more Discovery Administrators, install the complete TDS Discovery Guide on only one Discovery Administrator machine. Installing separate Discovery Administrator machines to build different cubes for the same guide is NOT recommended.
Figure 28. Viewing the multidimensional reports
Figure 28 shows the process of accessing the cubes in order to make the information available in the Discovery Interface. The $TDS\Cubes directory on the file server contains all of the cubes (.mdc files). They are generated periodically by the TDS Discovery Administrator. The PowerPlay report (.ppr) files are also located in the same directory. These are installed by installing a new TDS Discovery Guide. Every time the Discovery Interface is started and a specific topic is selected, these files are pulled over the network to produce the multidimensional views. In addition, the files Ed.mdb and DrillThru.mdb are also pulled over the network from the TDS File server to the Discovery Interface machine when using Tivoli Decision Support in a network mode architecture.
Figure 29 highlights the process of accessing the data and the templates in order to produce the Crystal Reports reports. The $TDS\Reports directory on the file server contains all the Crystal Reports files (.rpt). These are also installed by installing TDS Discovery Guides. In a network mode architecture, these files are pulled over the network when a Crystal Reports view is selected from the topic map.
[Figure 29: Crystal Reports — data read from the Inventory, DM, and TEC databases and report templates pulled from the file server to the Discovery Interface]
The selection of a Crystal Reports view, identified by a page-like icon in the topic map, performs two network operations. An ODBC connection is established with the database from which the data is collected, and the respective report templates stored in the $TDS\Reports directory on the file server are pulled over to the client machine where the TDS Discovery Interface is installed. The Crystal Reports report is then displayed in the Discovery Interface.
The Discovery Interface also reads information from the $TDS\Data directory of the TDS File server. The necessary information includes all the topic map data, which views are in which categories and topics, the related views, the view hint description, and keywords for searches. This information is found in the Microsoft Access database (.mdb) files.
Note
When a new TDS Discovery Guide is installed on the TDS File server, the files Ed.mdb and Cubes.mdb are updated to include the new topic maps and view information. Both the Discovery Interface and the Discovery Administrator components access the contents of these files at application start-up time.
[Figure: ODBC connection from Tivoli Decision Support to the Inventory, DM, and TEC databases]
In network mode, only the Discovery Interface component, PowerPlay in Standard mode, Crystal Reports (if the Crystal reports are to be changed), the database client, and the ODBC driver are installed on the client machine. The other components are installed elsewhere, as described below. The user of a client system only has access to the Discovery Interface. The Discovery Administrator machine is the only point from which to administer Tivoli Decision Support.
The configuration diagram shown in Figure 31 details the connections made between the various components in a network mode architecture. This is the architecture that should be used for medium to large environments.
[Figure 31: network mode — client systems running the Discovery Interface, a file server running the server components (including guides, cubes, and models), and ODBC connections to the Inventory, DM, and TEC databases]
A network mode installation makes it possible for multiple users in the various areas of the enterprise to access Decision Support. All clients connect to the file server, where the cubes are updated on a prescheduled basis, preferably during off hours. This is a more realistic and purposeful deployment method in which the various Tivoli Decision Support operations are separated. Only the system administrator will be able to perform the Discovery Administrator tasks, with the client systems only having access to the Discovery Interface to open and work with the views of data.
A mixture of the two installation methods can also be implemented. However, this will depend on the roles and resource capacity of the machines on which Decision Support runs. The Discovery Administrator and TDS File server can be installed on the same machine with multiple clients accessing the shared server component directory. One thing to watch out for, though, is that a cube build may severely reduce the responsiveness of a client Discovery Interface that is trying to view and drill down into the data at the same time as the cube build.
be processed by a TDS Discovery Guide, who is going to utilize the TDS Discovery Interface, who will be responsible for the TDS Discovery Administrator machines, and so on. In addition, all the customer's Tivoli Service Desk applications and servers, as well as their physical locations, should be documented.
The situation we are faced with represents an architecture in which all the servers are centrally located in one region, running on the same network segment as the Problem Management databases. There are no requirements for decision-making personnel at other locations to access the Decision Support business information.
Since TDS will be used by a single user, we can choose to install it in stand-alone mode. In stand-alone mode, everything runs on one system, and none of the modules are shared with anyone else on the network. In the environment under analysis, this case holds true. We have suggested a single machine to perform the Tivoli Decision Support functionality. This allows for central management and does not put extra traffic on the network.
Communication is only between the Tivoli Decision Support system and the problem management database.
[Figure: the single TMR environment — a TMR server running TEC, Inventory, Distributed Monitoring, and Software Distribution with its database repository, and gateways serving the endpoints]
As before, our first analytical decision is to determine whether we should deploy Tivoli Decision Support in stand-alone or network mode. In most businesses, like the one managed in our example, there will be a need to present reports to various levels of management. These reports must present a wide range of data views filtered to the level of detail required by the person operating the Discovery Interface client. There should also be a dedicated systems administrator who is responsible for performing all the systems management technical administration for the business. This individual will, most probably, also be responsible for maintaining the entire Tivoli infrastructure.
[Figure 34: the suggested solution — the TDS Administrator alongside the TMR server (TEC, Inventory, Distributed Monitoring, Software Distribution), the database repository, and the gateways]
The TDS deployment solution shown in Figure 34 will have to be a network mode installation. The TDS File server, Discovery Interfaces, and Discovery Administrator will be set up to run from the main site along with the other Tivoli enterprise systems. There will be no clients located at the remote sites (if any), and only one Discovery Administrator is required. The solution shown in Figure 34 will integrate with the database repository, where the administrator will run queries to build the multidimensional cubes and from which the Discovery Interface can read the Crystal Reports reports. The Discovery Interface machine will also connect to the TDS File server to access the cubes. All these components are located on the same LAN and, thus, the Tivoli Decision Support process should function optimally.
[Figure: the multiple TMR environment — Site A holds the database repository and the Tivoli applications (TEC, Inventory, Distributed Monitoring, Software Distribution); Sites B and C each have a TMR server, gateways, and endpoints]
For this case, it is very clear that we will need to deploy Tivoli Decision Support in network mode. Unlike the single TMR environment, there will be decision makers at the remote sites who need to have access to the business information. This means that we will need to have Discovery Interfaces installed at the remote sites B and C. As in the single TMR environment, there will be a need to present reports to various levels of management, but, in this case, access to the TDS system from remote sites is required.
[Figure: remote Site B — a TMR server, a gateway, a TDS secondary file server, and TDS clients]
As in all other cases, there should be a dedicated Discovery Administrator machine installed on the same network segment as the database repository, with the intention of improving performance during the cube-generation process. If the installation of additional TDS Discovery Guides is required, this will, in turn, result in increased resource utilization on the administration console. Consequently, the cube-building process will take longer to complete. In order to distribute the work load, it is appropriate to have additional Discovery Administrator machines installed. A scenario with one TDS Discovery Guide per Discovery Administrator machine would be the best approach, but this results in the systems administration tasks being distributed. A separate Administrator machine should then only be implemented if the business requires that different people manage the guides that are relative to their job roles.
Note
The multidimensional PowerPlay cubes are scheduled to be built on a regular basis. As more TDS Discovery Guides are installed, the number of cubes increases. The cube-building process occurs in sequential order. The addition of more Discovery Administrators for different guides will enable the workload to be shared.
Now, we face the issue of the distributed Discovery Interface clients that are needed by personnel at the remote sites. Section 4.4.2, The Discovery Interface on page 68 discusses the operations and the communication that take place between the Discovery Interface and all other TDS components. The Discovery Interface will connect to the TDS File server to retrieve the PowerPlay cubes and to the database server to retrieve the data needed to build the Crystal Reports reports. The problem we face is that this connection will now take place over the WAN. This will obviously result in a delayed response while trying to access the various views over the network.
To solve this problem, we recommend the implementation of one TDS File server per remote site. The main TDS File server will reside on the same LAN as the Discovery Administrator machine and will be responsible for serving both the Discovery Administrator machine and all the Discovery Interfaces in the main location. At the other sites, the TDS File server will be responsible for serving the Discovery Interfaces there. This configuration will have a significant positive impact on the response times for clients accessing information stored in the cubes.
To implement this architecture, some sort of replication needs to take place from the main TDS File server to the local site TDS File servers. One method would be to write a script or program that triggers a file transfer from the main TDS File server to the remote TDS File servers. This process should be executed soon after the scheduled cube build takes place. This will ensure that the cubes at all locations reflect the current environment.
Besides the cubes, the TDS drill-through database files also need to be replicated.
Note
The script has to perform a similar function to the TDS rover utility, which copies the cubes from the $TDS\Cubes\Temp directory to the $TDS\Cubes directory on the TDS File server. This will ensure that the transfer of cube files does not fail if the Discovery Interfaces are using them.
Discovery Administrator errors include, but are not limited to, installation errors, ODBC connectivity issues, Visual Basic script syntax errors, SQL syntax errors, runtime errors, and data anomalies. Basically, if the error occurs before the data source file is created, the problem lies either with the Administrator or with the data being exported.
After the transformation process, you can see whether a cube failed to build by examining the Rover window. This window appears as an icon on the task bar during the cube-building process. One error you may receive is Error 53 - Cube did not build. This error is a consequence of a query that returns no data.
Cognos creates a log file called EDAdmin.log in the $TDS directory. In addition, you may check the <model>.log file that is created each time a cube is built. This file is saved to the $TDS\Model directory. You should look for the entry (TR3201) update of cube XXX is incomplete and find the problem. If a <model>.log file is created, the cube build has exported the data successfully; the problem then either lies with Cognos, or the cube file has not been successfully copied from the $TDS\Cubes\Temp directory to the $TDS\Cubes directory.
Another Rover error you may experience is Error 70 - Permission Denied. In this case, Rover tried to copy the temporary cube to the $TDS\Cubes folder while a user had a view open that references the cube being built. The solution is either to try building the cube later or to ask all users to close the Discovery Interface. If Rover has timed out, you should copy the cubes manually.
A failed copy message from Rover may also mean one of two things: the user does not have write/execute permissions on the $TDS directory, or the cube does not contain at least one compressed record. If you think these two criteria have been satisfied, check the <model>.log file and the EDAdmin.log to find out why the cube did not generate.
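Checking the model log for the incomplete-cube marker mentioned above is easy to automate. The TR3201 string comes from the text; the function itself is an illustrative sketch, not a TDS utility:

```python
def find_incomplete_cubes(log_lines):
    """Return the model-log lines that report an incomplete cube
    update, i.e. those carrying the (TR3201) marker described in
    the troubleshooting text.
    """
    return [line.strip() for line in log_lines if "(TR3201)" in line]
```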
Cognos errors include, but are not limited to, General Protection Fault errors (GPFs), errors resulting from an incorrectly configured model (.pyg file), and data anomalies. Cognos errors are the opposite of administrator errors and can only originate once the data source file has been created.
82
Discovery Interface does not close down properly - If you shut down Windows while the Discovery Interface is still open, you may receive an error that TDS is not closed. You should always close the Discovery Interface before shutting down Windows in order to properly close the OLE link between the Discovery Interface and Cognos PowerPlay.
Receive load_graph_from_powercube encountered error message - This error is received when you attempt to display a view for which a cube has not been built. You should verify that the cube is built using the Discovery Administrator component.
Discovery Interface does not display published tab - To be able to publish push documents from the Discovery Interface, the Published tab must be available from the Hints pane. If this tab is not available, select the Enable ADI Publish option on the Options dialog box of the Tivoli Discovery Publisher.
5.1 Overview
As mentioned above, this chapter suggests a Tivoli Decision Support implementation solution for the IBM SDC West environment. The SDC West provides strategic I/T outsourcing services to over 50 IBM and commercial customers all over the United States. The SDC delivers a broad scope of solutions, including server management (from S/390 servers to NT servers), desk-side support, and customer care services. They do this through four teams: Enterprise Services Delivery, Distributed Services Delivery, the Business & Technology office, and the Customer Service Center. They are one of four geographic service delivery centers in IBM Global Services. The purpose of the case study is to detail a Tivoli Decision Support solution that will be integrated into the IBM SDC West Tivoli architecture, with the intention of providing a smooth migration from the reporting tools currently implemented.
We aim to exploit the Tivoli Decision Support technology using IBM as a showcase, with the goal of reducing IBM's IT reporting costs. This chapter will also provide a list of future requirements for Tivoli Decision Support, where ideas for extra features will be presented based on our experience with the case study.
5.2 Methodology
Now, we will apply the methodology described in Chapter 3, Methodology on page 23. The case study will start with a requirements-gathering phase, where a survey of reporting requirements and the current SDC West environment will be carried out. We will then use predefined techniques to analyze all the received information and present a technical proposal to meet the customer's requirements. A procedure-driven deployment exercise will then be presented, illustrating the steps and task flow for each phase of the roll-out.
5.2.3 Deployment
Section 5.9, Tivoli Decision Support deployment process on page 144 delineates the implementation of the design defined in Section 5.8, Suggested architecture and solution design on page 140. A high-level task flow will be presented for each step of the deployment process.
Case study
similar to the server infrastructure with the need for fewer high-availability features.
[Figure: TEC consoles, the Boulder gateways, and their endpoints]
NotesView, Netfinity, AMA, CRT, MQSeries, NFS subsystem, SNA links, hardware, and SP switch interface. These events are stored in a Sybase SQL Server Database and will be used as a source of information for the Tivoli Decision Support Event Management Discovery Guide.
5.3.6 Tivoli DM and monitors for performance and capacity trend data
The SDC West has implemented Tivoli Distributed Monitoring to provide standard monitors for AIX and Windows NT servers. These monitors store the collected data in flat files, which are processed later on the Hub TMR. The standard monitors for AIX servers are:
- CPU usage, used to compute an hourly average and sampled every minute
- Process memory and paging space, sampled hourly
- Network packets (in/out) and errors (in/out), sampled daily
- File system snapshot, sampled daily
Today, the SDC West uses Netfinity Capacity Manager to collect performance and capacity data from the Windows NT servers. The DM monitors are used to collect availability information. The following are the standard monitors for Windows NT servers that will be implemented at the time of the TDS deployment:
- CPU usage - the percentage of processor time monitor (NT_Processor monitoring collection), sampled every 10 minutes
- Memory usage - the pages/sec and Available Bytes monitors (NT_Memory monitoring collection), sampled every 10 minutes with an hourly average
- Network activity - the Packets Outbound Errors and Packets Received Errors monitors (NT_NetworkMonitor monitoring collection)
- Disk utilization - the percentage of utilization of each drive or disk on the system, currently using the Disk Space Percentage Used monitor in the Universal monitoring collection
The way monitors are distributed to servers depends upon the relationships between the various Tivoli container objects. Essentially, monitors are grouped in profile managers (and their sub-profiles) by account, platform, function, and so forth on the Hub TMR, following a consistent naming convention. On the Spoke TMRs, similarly-named profile managers exist, which simply act as containers for subscribed servers. The Hub TMR profile manager then subscribes the Spoke TMR profile managers to it. Figure 38 on page 92 shows the actual profile structure implemented by the customer.
[Figure 38: the profile manager structure — Hub TMR profile managers (NCO, Customer ABC, NCO NT Monitors, Distributed Monitoring) subscribing the corresponding Spoke TMR profile managers (NCO.TMR2, Customer ABC.TMR2, NCO AIX.TMR2, NCO NT.TMR2, ABC NT.TMR2) down to an NT managed node]
requirements. Future requirements, as well as future recommendations, will be discussed in Section 5.10, Future reporting requirements on page 162.
Note
It is beyond the scope of this redbook to identify all the reports generated and used by IBM SDC West. We will only consider the reports that cover the basic metrics of availability, capacity and performance, response time to failure (SLA), and cost, if available.
[Figure: the current reporting problem — multiple tools (Tool 1 through Tool n) feeding a separate analysis step that produces the reports]
IBM SDC West uses two methods for reporting. Despite valiant efforts, these two methods still face the problems of using multiple tools, dealing with different sources of information, and storing the data in files with different formats. The first will be referred to as the in-house method, which was developed by the IBM Service Delivery Center in Tucson, Arizona. The second method, Server Resource Management (SRM), was developed by the IBM Global Service South Performance & Capacity team.
For organizational and security reasons, both methods use the concept of accounts to access the reports. Each group of people responsible for an account can look only at the reports that are related to their business and interest. There are IBM internal accounts, which are related to IBM internal departments or locations, and external accounts, which are related to IBM customers. All these reports can be reached through either the IBM Intranet or the Internet.
For the purpose of this case study, we will look at the reports of an internal IBM account called Network Computing Offerings (NCO). The NCO account uses both methods of reporting. We will identify the reports available in the actual SDC solution for reporting and then map these reports to those produced by Tivoli Decision Support.
5.4.2.1 The in-house method
The in-house solution relies on many sources for collecting data, such as:
UNIX-based tools:
- The vmstat command is sampled every minute and used to compute an hourly average.
- Process memory is checked with the ps gv command, and paging space percentage full is checked with the lsps -a command; both are sampled hourly.
- Network packets in/out and errors in/out are sampled daily using the netstat command.
- File system snapshots are checked with the df -k command and sampled daily.
Tivoli applications:
- Tivoli Distributed Monitoring
- Tivoli NetView
- Tivoli Enterprise Console
Netfinity Capacity Manager
Figure 40 shows the process used for reporting with the in-house method:
[Figure 40: the in-house reporting process — Phase 1, data collection (AIX tools, Tivoli applications, and Netfinity monitors); Phase 2, data processing; Phase 3, report generation in HTML and Java format on the IBM Intranet]
The three phases of the in-house process are detailed in the following list:
Phase 1 - Data Collection: The monitors collect relevant information according to predefined thresholds and write the data to files on a file server.
Phase 2 - Data Processing: As soon as the data is collected, it is processed daily by the in-house Perl scripts and programs, which populate flat files for each server and metric. A rolling year of data is kept.
Phase 3 - Report Generation: The reports are generated in HTML and Java format, showing each month's data in tabular form with links to applets that graph the data sets for the year by account (server group).
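The phase-2 reduction can be sketched as follows; the sample shape and bucketing are simplified assumptions (the real scripts are Perl and handle full timestamps and file formats):

```python
from collections import defaultdict

def hourly_averages(samples):
    """Reduce per-minute samples to hourly averages, the kind of
    daily processing the in-house scripts apply to CPU data.
    `samples` is a list of (hour, value) pairs.
    """
    buckets = defaultdict(list)
    for hour, value in samples:
        buckets[hour].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}
```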
5.4.2.2 The SRM method
In order to satisfy the current requirements for reporting, the IBM Global Service South Performance & Capacity team has developed a set of Server Resource Management tools to expand the performance and capacity trending on the Distributed Systems Management (DSM) platforms and applications, such as AIX/UNIX, Windows NT, OS/2, SAP R/3, and Lotus Notes. The SRM tool set is used for both internal and external account reporting.
The SRM solution relies on existing monitoring tools for each available platform, such as Netfinity Capacity Manager (CISC platforms), Perl scripts (RISC platforms), Zperstat (for SAP R/3), and NotesView (the Lotus environment), to collect and pass on event data. The SRM tool is divided into four main components that enable the performance and capacity trending process, as detailed in the following list:
Phase 1 - Collection: The data is passively collected in multiple formats and stored in files.
Phase 2 - Transmission: The SRM tool receives the data, converts it to a common format, and stores it in a single database.
Phase 3 - Analysis: An automated process provides Distributed Systems Management resource trending and exception analysis.
Phase 4 - Web Preparation and Presentation: The data is processed, and HTML and Java reports are generated and published on the IBM Intranet.
Figure 41 graphically represents the SRM collection and reporting process:
[Figure 41: the SRM collection and reporting process — Phase 1, collection (Netfinity Manager, Zperstat, NotesView); Phase 2, transmission into SAS/VM and DB2; reports published on the IBM Intranet]
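The transmission phase's conversion to a common format can be sketched like this; both source layouts and the target schema are invented for illustration and are not SRM's actual formats:

```python
def to_common_format(record, source):
    """Convert a source-specific record into one common schema
    before it is stored in the single SRM database.
    """
    if source == "netfinity":
        host, metric, value = record          # already a tuple
        return {"host": host, "metric": metric, "value": float(value)}
    if source == "perl":
        host, metric, value = record.split()  # e.g. "hostA cpu 42.5"
        return {"host": host, "metric": metric, "value": float(value)}
    raise ValueError("unknown source: " + source)
```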
5.4.3.1 In-house reports
The following are the reports available for the NCO account using the in-house method for reporting:
1. Performance and capacity metric summary
As shown in Figure 42 on page 98, this report shows CPU utilization, memory paging utilization, and network I/O (IP packets and errors count)
per server for a specified month. There are also links that give access to detailed reports by server. For CPU and Memory/Paging, the first pair of numbers lists the average of all samples and the average of the eight highest samples of data. The second average gives some indication of the load during high-usage bursts, which may or may not occur during consecutive prime-shift hours. If the averages are similar, it can be inferred that usage of the system is relatively steady throughout the day. The number in parentheses lists the daily high sample. For Network IO, the daily IP packet and error counts are shown in parentheses for input and output.
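The report's three summary numbers described above can be computed as follows (a sketch of the arithmetic, not the actual reporting code):

```python
def metric_summary(samples):
    """Return (average of all samples, average of the eight highest
    samples, daily high sample) for one day's metric data, matching
    the three numbers shown in the summary report.
    """
    top8 = sorted(samples, reverse=True)[:8]
    return (sum(samples) / len(samples),
            sum(top8) / len(top8),
            max(samples))
```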
[Figure 42: the performance and capacity metric summary, with links to the Network IO Utilization, DASD Usage Report, and Process Memory and Paging Utilization detail reports]
The following are descriptions of the detailed reports for the server snjs1sm1.sanjose.ibm.com.
Figure 43 states the high sample, the average of all samples, and the average of the highest eight samples for CPU utilization by server:
CPU utilization by server - For UNIX servers, the data is collected every minute using the vmstat command. For Windows NT servers, it is collected by Netfinity Capacity Manager. The data shown for future months is from the previous year.
Figure 44 on page 100 states the high sample, the average of all samples, and the average of the highest eight samples for process memory and paging utilization.
Figure 44. Detailed report - process memory and paging utilization by server
Process memory and paging utilization by server - For UNIX servers, the data is collected on an hourly basis using the ps and lsps commands. For NT servers, all data is collected by Netfinity Capacity Manager.
Figure 45 on page 101 states the number of IP packets in and out.
Network I/O utilization by server - For UNIX servers, daily IP packet and error counts are collected for input and output using the netstat command. For Windows NT servers, the data is collected using Netfinity Capacity Manager.
Figure 46 on page 102 shows the size and daily percentage of kilobytes used by the file systems on each server.
DASD utilization by server - For AIX servers, the data is collected using the df -k command. For Windows NT servers, the data is collected using Netfinity Capacity Manager.
Alert summary report - Items covered by this report include the number of alerts and log entries, CPU and disk utilization, node up or down, and services stopped or started. Refer to Figure 48:
Log Entries:
Tue Jun 8 05:49:33 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node down (unpingable from monitor server)], Subsystem [os]
Tue Jun 8 06:01:43 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node up (pingable from monitor server)], Subsystem [os]
Tue Jun 11 05:52:39 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node down (unpingable from monitor server)], Subsystem [os]
Tue Jun 13 05:50:09 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node up (pingable from monitor server)], Subsystem [os]
Tue Jun 14 06:00:29 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node down (unpingable from monitor server)], Subsystem [os]
Tue Jun 14 08:59:54 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node up (pingable from monitor server)], Subsystem [os]
Tue Jun 15 05:51:12 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node down (unpingable from monitor server)], Subsystem [os]
Tue Jun 15 08:42:49 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node up (pingable from monitor server)], Subsystem [os]
Tue Jun 16 06:20:51 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node down (unpingable from monitor server)], Subsystem [os]
Tue Jun 16 08:27:16 MDT 1999 System [amawest.boulder.ibm.com (9.99.188.204)], Message [Node up (pingable from monitor server)], Subsystem [os]
Case study
103
5.4.3.2 SRM Reports The following describes the reports available for the NCO South account using the SRM tool set. We describe the reports for the Lotus Notes application, AIX performance, and Windows NT performance. 1. Lotus Notes reports
The following are the reports available for NCO South Lotus Notes servers: Mail Server Report This report gives daily, weekly, and monthly statistics on Lotus Notes mail servers in the NCO South account, such as the number of concurrent users, the number of mail messages, and the response time. The report shown in Figure 49 is the monthly utilization statistics:
Database Server Report This report gives daily, weekly, and monthly statistics on Lotus Notes database servers in the NCO South account, such as the number of concurrent users, the number of replicated documents, and so on. The report shown in Figure 50 is the monthly utilization statistics. There are also reports that include weekly and daily statistics.
Mail Hub Server Report This report gives daily, weekly, and monthly statistics on Lotus Notes mail hub servers in the NCO South account, such as the total mail traffic (MBytes) and the average response time. Figure 51 on page 106 shows the daily utilization statistics. Other reports include weekly and monthly statistics.
MTA Server Report This report gives daily, weekly, and monthly statistics on Lotus Notes MTA servers in the NCO South account, such as number of SMTP transferred messages and the average response time. Figure 52 shows the daily utilization statistic. Other reports include weekly and monthly statistics.
Hourly Response Time Report This report gives hourly response time on the Lotus Notes database, MTA hub, and mail server in the NCO South account. Figure 53 shows the hourly response time for the Lotus Notes mail server.
Hourly Concurrent Users Report This report gives hourly concurrent users on the Lotus Notes database and mail server in the NCO South account. Figure 54 shows the concurrent users report for the Lotus Notes mail server, which is produced daily.
Hourly Sessions per Minute Report This report gives hourly sessions per minute on Lotus Notes servers in the NCO South account. Figure 55 shows the hourly sessions per minute by server report, which is produced daily.
Hourly Mailbox Size Report This report gives the hourly mailbox size on Lotus Notes servers in the NCO South account. Figure 56 shows the hourly mailbox size by server report, which is produced daily.
Hourly SMTP Transferred Messages This report gives hourly SMTP transferred messages on Lotus Notes servers in the NCO South account. Figure 57 shows the hourly transferred messages by server report, which is produced daily.
2. AIX reports
The following reports are produced by SRM for AIX servers in the NCO South account. CPU/Storage Utilization and IO Wait Report This report gives daily, weekly, and monthly CPU Utilization of the AIX servers in the NCO South account. Figure 58 on page 110 shows the monthly utilization statistics. There are also reports that include weekly and daily statistics. In addition, we can access data for a specific server.
Warnings Only (daily, weekly, monthly) This report gives daily warnings of excessive prime-time CPU utilization by AIX servers in the NCO South account, according to specified thresholds. It is a subset of the report shown in Figure 58, containing only those servers with exceeded thresholds. Hard Disk/File System Utilization Report This report shows the hard disk and file system availability by AIX servers in the NCO South account. Figure 59 on page 111 shows the disk and file system utilization by server report, which is produced daily.
Figure 59. AIX servers - Hard disk and file systems utilization report
AIX Capacity Summary This report gives a summary of the percentage of utilization for all available AIX servers in the NCO South account. The report shown in Figure 60 shows statistics on CPU utilization, Run Queue status, Memory status, IO Wait status, and Disk space status on a rolling month basis for all servers.
3. Windows NT Reports
The following reports are produced by SRM for Windows NT servers in the NCO South account. CPU, Memory and Disk Utilization (daily, weekly, monthly) This report gives daily, weekly, and monthly CPU utilization of the Windows NT servers in the NCO South account. Figure 61 on page 112 shows the monthly utilization statistics. Other reports include weekly and daily statistics. In addition, we can access data for a specific server.
Windows NT capacity summary This report gives a summary of the percentage of utilization of all the available Windows NT servers in the NCO South account. The report shown in Figure 62 on page 113 shows statistics on CPU utilization, Memory status, and Disk space status on a rolling month basis for all servers.
Modelling Once the workload characterization baseline is in place, What If models and scenarios may be built based on the customer's estimated business drivers and/or the associated server resources to be assigned to meet future business drivers. With the actual reporting solution used by the SDC West, workload balancing and modeling are performed platform by platform, for example, RISC to RISC and CISC to CISC (but not RISC to OS/390 or other cross-platform combinations). Tivoli Decision Support, along with its Discovery Guides, is the best solution for the customer's objectives and performs exceptionally well with most business strategies. Tivoli Decision Support best delivers on the server requirements for expanded data collection, database interface, workload characterization, modeling, and Web publishing.
Customer requirements
- Performance measurement
- Cost prediction
- Response time to failures (SLA)
- Detailed application-related reports
- Network capacity and performance analysis
actual requirement for reporting. In this section, we show the reports from the various TDS Discovery Guides that were analyzed in Section 5.6, Mapping Tivoli Decision Support Discovery Guides on page 114, and compare them to the customer's actual scenario. The following table shows the customer's current reports and a reference to the recommended TDS report. These reports are displayed in Section 5.7, Tivoli Decision Support reports and business information on page 116.
Table 11. Detailed mapping reference table
- Lotus Notes - Database server report. Refer to Figure 50 on page 105
- Lotus Notes - Mail Hub server report. Refer to Figure 51 on page 106
- Lotus Notes - MTA server report. Refer to Figure 52 on page 106
- Lotus Notes - Response time report. Refer to Figure 53 on page 107
Recommended TDS reports, in table order: Figure 77 on page 133; Figure 78 on page 134; Figure 79 on page 135; Figure 75 on page 131; Figure 78 on page 134; future requirement; Figure 80 on page 136; future requirement; future requirement.
This guide relies on Tivoli Distributed Monitoring as the source of the network activity data and, if available, on Tivoli Inventory for enterprise system hardware information. The objective of the Tivoli Decision Support for Server Performance Prediction (SPP) Discovery Guide is to provide the customer with the capacity to plan using basic trending of key system metrics. Most workstation and server performance problems in a network can be avoided by identifying system workload in time, before it exceeds the capacity of the systems. The SPP guide has subsections in the form of questions, such as: How might I improve performance on my systems? How is my overall performance? What performance problems are on the horizon? By clicking these question icons, information about your environment can be obtained in report format. The following are some reports available with the Server Performance Prediction Discovery Guide.
All System Metrics report The following metrics are intended for both UNIX and NT platforms:
- CPU percent busy (user time and system time)
- CPU run queue length
- Disk I/O rate and disk transfer rate
- Memory page-in/out rate (pages/sec.)
- Memory page-scan rate (seeks/sec.)
- Network packet collision rate (packets/sec.)
- Network packet input/output rate (packets/sec.)
- Network packet input/output error rate (packets/sec.) (packet # on NT)
Figure 63 on page 118 shows the All System Metrics report, which displays a summary of all system performance metrics sorted by system purpose.
CPU utilization by server By using the drill-down facility, we can produce variations of the All System Metrics report. Figure 64 on page 119 shows CPU utilization only for a particular server called ariel.
Memory Utilization by Server From the All System Metrics report, we can choose to show only the memory utilization. Again using the drill-down capability, we select only the DNS servers in our network. Figure 65 on page 120 shows an example of the memory utilization for all DNS servers in July 1999.
Network I/O utilization by server This report provides network packet I/O rate, network packet I/O error rate, and network packet collision rate.
Figure 66 on page 121 shows example network I/O rate information for all SAP R/3 servers.
CPU utilization and memory page rates by operating system As shown in Figure 67 on page 122, this report provides daily information about CPU Utilization (% units) and memory page rates by operating system for all Oracle servers in our environment.
Summary by operating system This report gives a summary of utilization of all available servers by operating system for all collected metrics.
Figure 68 on page 123 shows the summary of all AIX servers for July, 1999.
Forecasts reports One of the most important features available in this TDS Discovery Guide is the ability to provide forecasts. We can, for example, predict future average CPU utilization of all servers in order to avoid a possible system slow down. In Figure 69 on page 124, we show examples of 30, 60, and 90 day average forecasts of CPU utilization of all servers grouped by system purpose.
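TDS and Cognos produce these forecasts internally; purely as an illustration of the kind of trending involved, a least-squares line can be fitted to daily utilization samples and extrapolated 30, 60, or 90 days past the last observation. The sample data below is invented, not taken from the case study.

```python
def linear_forecast(series, days_ahead):
    """Fit y = a + b*x to daily samples by least squares and
    extrapolate days_ahead days past the last observation."""
    n = len(series)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    slope /= sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + days_ahead)

# 30 days of average CPU utilization rising 0.5% per day from 40%.
cpu = [40.0 + 0.5 * day for day in range(30)]
for horizon in (30, 60, 90):
    print(horizon, round(linear_forecast(cpu, horizon), 1))
```

With this steadily rising sample, the 90-day forecast approaches 100% utilization, which is exactly the kind of slow-down warning the guide's forecast views are meant to surface.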
Case study
123
Under-provisioned and Over-provisioned servers This view highlights systems where the CPU activity is disproportionate to the network activity. If a system shows very high CPU utilization but relatively low network activity, that system may be under-provisioned (the CPU is inadequate for the workload). If a system shows very low CPU utilization but relatively high network activity, that system may be over-provisioned (the CPU is excessive for the workload). The measure used for this view is the processor overload, expressed as a percentage: the difference between the CPU and network utilization, divided by the network utilization. Figure 70 on page 125 is an example of this report for all SAP R/3 servers.
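Read literally, the processor-overload measure described above can be sketched as follows; the function name and sign convention are our interpretation of the paragraph, not the guide's actual code.

```python
def processor_overload(cpu_util, net_util):
    """Difference between CPU and network utilization, as a
    percentage of network utilization. Strongly positive values
    suggest an under-provisioned CPU; strongly negative values
    suggest an over-provisioned one."""
    return (cpu_util - net_util) / net_util * 100.0

print(processor_overload(80.0, 20.0))  # → 300.0 (CPU far ahead of network)
print(processor_overload(10.0, 50.0))  # → -80.0 (CPU idle relative to network)
```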
The following are some reports available with the Event Management Discovery Guide.
SLA statistics by event class This report ranks the different types of events that you have handled according to which ones have most often not been resolved within the bounds of the Service Level Agreement. Figure 71 shows information, such as the percentage of met, missed, and nearly-missed SLAs.
Which events take the longest to fix? This view gives you a snapshot of the average duration (mean time to repair) for events of different classes and highlights the ones that take the longest to fix. Figure 72 on page 127 shows the amount of time, in minutes, that certain events, such as Link_Down_Cisco, take to be fixed.
When are my peak times for event volume? Figure 73 on page 128 shows the average number of events by event source for each hour of the day. This average is based on the previous 30 days' worth of events. It can be helpful in establishing shift schedules and staffing levels for your service center or in isolating a scheduled activity that is causing problems.
The version of the Tivoli Domino Management Discovery Guide used during the course of this book was a beta code version. Functions, features, and supported environments for this product are subject to change without notice any time before or after general release.
The content and detail of the reports provided by the Domino Management Discovery Guide depend on the type of data that can be collected and on the database schema in which this data is stored. In a TMR environment, Tivoli Manager for Domino monitors are defined and distributed to collect data, which is then stored in a relational database. Once this data is in the relational database, the Domino guide can be used to extract and analyze it through reports in the TDS Discovery Interface. Such reports include server performance, server ranking, and server prediction reports.
To set up the Domino Discovery Guide, the database schema must be defined first. Domino-specific monitors are then distributed in a TMR environment, and a Domino roll-up job runs nightly to collect and aggregate the data into the database table. The Tivoli Discovery Administrator is used to define queries that extract data from this table into a comma-separated values (.csv) file. The Cognos Transformer builds the multidimensional cube from this file, which can then be reported on by Cognos PowerPlay. Finally, once the guides and roles have been defined, the Tivoli Discovery Interface is used to view these reports.
Since the Domino guide uses Domino-specific DM monitors for reporting, it must also use the Domino Roll Up module, which comes with the Domino Management Discovery Guide. This module, which is installed in a TMR environment, contains the scripts necessary to create the database schema and perform the Domino roll-up job.
The following are some reports available with the Domino Management Discovery Guide.
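Before turning to those reports, the extract step described above — Discovery Administrator queries writing .csv files for the Cognos Transformer — can be roughly sketched as follows. The table, column, and monitor names are invented for illustration (an in-memory SQLite database stands in for the roll-up database); the real Domino roll-up schema will differ.

```python
import csv
import sqlite3

# Stand-in for the roll-up database; schema names are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dm_rollup (host TEXT, monitor TEXT, value REAL)")
con.executemany(
    "INSERT INTO dm_rollup VALUES (?, ?, ?)",
    [("cartman1", "Mail.TotalRouted", 1200.0),
     ("cartman1", "Mail.TotalRouted", 800.0),
     ("kenny2", "Mail.TotalRouted", 500.0)],
)

# A query aggregates the raw monitor samples ...
rows = con.execute(
    "SELECT host, monitor, SUM(value) FROM dm_rollup "
    "GROUP BY host, monitor ORDER BY host"
).fetchall()

# ... and the result is written as the .csv file from which the
# Transformer would build a multidimensional cube.
with open("domino_rollup.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["host", "monitor", "total"])
    writer.writerows(rows)

print(rows)
```

In the real product, the query definitions live in the Discovery Administrator and the cube build is driven by the Cognos tools rather than hand-written code.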
Domino Network Traffic This view shows how much TCP/IP traffic is sent and received by the Domino servers. These statistics reflect the values in bytes from the NET.TCPIP.BytesReceived and NET.TCPIP.BytesSent monitors from the Domino servers. Figure 74 on page 130 shows the monthly utilization statistics by hostname report.
Domino mail statistics These reports show the various Domino Mail statistics, such as:
Mail Total Routed Figure 75 on page 131 shows the total number of mails routed by the Domino servers. These statistics reflect the values from the Mail.TotalRouted monitor from the Domino servers. The daily total values in bytes are shown for July 1, 1999.
Mail total KBytes transferred Figure 76 on page 132 shows the Total KBytes transferred by the Domino servers. These statistics reflect the values from the Mail.TotalKBTransferred monitors from the Domino servers. The daily total values in KBytes are shown for July 1, 1999.
Domino number of user statistics Figure 77 on page 133 shows the number of users managed by the Domino server Cartman1. These statistics reflect the values from the Server.Users monitor on the Domino servers. This view shows the server statistics by their hourly values, which is a good place to start looking for hourly trends that point to specific server problems. The hourly values are shown for July 1, 1999.
Domino mail average delivery time Figure 78 on page 134 shows the mail average Delivery time statistics for all Domino servers. The average daily message count for the mail average statistics is shown for July, 1999. The statistics shown in this report reflect the values from the Mail.AverageDeliverTime monitor from the Domino servers.
Domino replication statistics Figure 79 on page 135 shows the number of replication requests for the Domino servers. The daily total request count is shown for July 1999. The statistics in this report reflect the values from the Domino.Requests.Per1Hour.Total monitor on the Domino servers.
Domino server average delivery time by hour Figure 80 on page 136 shows the server average delivery time statistics for all Domino servers by hour. The average hourly delivery message time statistics shown are for July 1, 1999.
Figure 80. Domino statistics - Server average delivery time by hour report
Domino server mail box file size Figure 81 on page 137 shows the server Mail Box file size statistics for specific Domino servers by hour. The average hourly mail box size statistics shown are for July 1, 1999.
Figure 81. Domino statistics - Mail box file size by server report
The following sections detail reports available with the Network Element Performance Discovery Guide.
Cisco CPU utilization Figure 82 shows the daily average CPU utilization collected by date and by hostname from the Cisco routers in our network. In addition, it shows an average of CPU utilization. Watch this report for trends indicating an increase in router utilization.
Figure 82. Network Element Performance Guide - Cisco CPU utilization report
Name Server speed statistics NetView collects Name Server performance data periodically during the day. Figure 83 on page 139 presents data for a single day during business hours allowing you to spot trends in Name Server utilization and determine peak and off-peak hours. This information can be used to judge the remaining capacity, to schedule maintenance (during off-peak times), or to schedule bulk jobs that depend on name resolutions.
Figure 83. Network Element Performance Guide - Name server speed by hour
Top ten problem systems Figure 84 on page 140 shows the systems that went down most frequently (although not necessarily accruing the most downtime) in July 1999. Frequent transitions can indicate system problems that should be investigated.
Figure 84. Network Element Performance Guide-Top ten nodes by transition count
Figure 85. Suggested physical design: the Hub TMR in Boulder (with TEC, Sybase, and RIM hosts), the TDS Discovery Administrator, the primary TDS file server, and TDS Discovery Interface clients; a Spoke TMR with its RIM host and DM; and, at the other sites, a secondary TDS file server with TDS Discovery Interface clients.
Figure 85 shows the high-level physical design configuration recommended to the customer. The figure outlines an implementation of Tivoli Decision Support in network mode. Since all the main Tivoli components and the database server are installed on the Hub TMR in Boulder, it is advisable to implement all Tivoli Decision Support components on the same local network as the Hub TMR server. In addition, a TDS file server will be installed at each of the other sites to provide the Discovery Interface clients with access to the cubes. The role of each TDS component in this configuration is explained below. The Tivoli Decision Support Discovery Administrator The TDS Discovery Administrator server performs the cube generation process. This machine will be installed on the same local network as the Hub TMR, thus improving performance during the cube generation process.
Cubes vary in size depending not only on the amount of data that is stored in the database and captured in them, but also on the time range specified when they are built. Based on the number of endpoints and servers managed by the SDC, it is a good idea to have sufficient disk space available on the Discovery Administrator machine to hold all the temporary and comma-separated (.csv) files created during the cube-building process. For hardware specifications, see Table 13 on page 145. If additional TDS guides must be installed, increasing resource utilization to the point where the time it takes to build all the cubes is no longer acceptable to the customer, additional Discovery Administrator machines should be installed. A scenario with only one TDS Discovery Guide per Discovery Administrator machine would be the best approach. Another reason to have additional Discovery Administrators is manageability, where each machine is maintained by different support people.
Note
Note that it is not recommended to split one TDS Discovery Guide across more than one Discovery Administrator machine. The TDS File Server The TDS server components include the models, queries, templates, and other information required to generate views for the Discovery Interface. These components must be installed on each TDS file server in our case study environment. The TDS file server should be a very fast computer for serving files and should have a fast network connection to its clients. For hardware specifications, refer to Table 13 on page 145. We recommend one TDS file server per site. The one in Boulder, called the primary TDS file server, is responsible for serving both the Discovery Administrator machine and the Discovery clients in Boulder. At the other sites, the TDS file servers, called secondary TDS file servers, are responsible for serving local clients. This configuration can improve response times for clients accessing information stored in the cubes. An update process for the secondary file servers must be started after the cube-building process and after the installation of a new TDS Discovery Guide on the primary file server. This update process is automated by defining scripts, Tivoli Tasks, and Jobs that run in a predefined order and
time. For additional details, refer to Section 5.9.2, Installation of the Tivoli Decision Support server components on page 145. Tivoli Decision Support Clients The Tivoli Decision Support client component is the Tivoli Discovery Interface, which provides all the tools needed to open and work with views of data from your organization's enterprise databases. This component must be installed on every client machine at customer sites where Tivoli Decision Support is to be used. These clients have two kinds of connections: a network connection to the local Tivoli Decision Support file server to get the information stored in the multidimensional cubes for the multidimensional views, and a direct ODBC connection to the databases on the Hub TMR server for online reports generated by Crystal Reports. Cognos PowerPlay PowerPlay is a third-party application that generates multidimensional cubes and must be installed with Tivoli Decision Support on every Tivoli Discovery Administrator machine and every Tivoli Discovery Interface machine. TDS Discovery Guides Tivoli Decision Support Discovery Guides are used to analyze the enterprise's key information. These guides provide users with a comprehensive set of best practices and views into their enterprise's databases. All recommended guides for this case study should be installed on each of the Tivoli Decision Support file servers (primary and secondary) and imported into the Discovery Administrator machine. The information repository (RDBMS) The TEC database, which is used by the Tivoli Decision Support for Event Management Guide, is installed and configured on the Sybase server that resides on the Hub TMR. Similarly, the database for Distributed Monitoring, which is used by the Tivoli Decision Support for Server Performance Prediction Guide, is created on the same Sybase server.
Note
On the existing Sybase server, another database should be created to hold the Server Performance Prediction Discovery Guide data, and an extra table must be created in the TEC database to house the information source for the Event Management Discovery Guide. For additional details, refer to Section 5.9.5, Deploying TDS for server performance prediction on page 154, and Section 5.9.6, Deploying the Event Management Guide on page 161.
Table 12. Tivoli Decision Support deployment steps
1. Install hardware and software prerequisites.
2. Install the Tivoli Decision Support server components.
3. Install the Tivoli Decision Support Discovery Administrator.
4. Install the Tivoli Decision Support client component.
5. Install and configure the Server Performance Prediction Guide.
6. Install and configure the Event Management Guide.
Table 13. Hardware specifications
Discovery Administrator: Intel Pentium II 400 MHz, 128 MB of memory, 80 MB of disk space
Discovery Interface: Intel Pentium II 300 MHz, 64 MB of memory, 60 MB of disk space
Table 14 shows the required steps to properly configure all file servers:
Table 14. TDS file server deployment steps
1. Configure the primary Tivoli Decision Support file server as a Managed Node of the Hub TMR. Reference: Framework Planning and Installation Guide.
2. Configure the secondary Tivoli Decision Support file servers as Endpoints. Reference: Framework Planning and Installation Guide.
3. On the Hub TMR, create a dataless profile manager called Secondary_TDS_FileServers and subscribe all secondary Tivoli Decision Support file servers to it.
4. Make the Tivoli Decision Support installation directory a shared resource on the primary and all secondary Tivoli Decision Support file servers. Reference: Windows NT Server User Guide.
5. Create and configure the transfer.cmd and copycubes.cmd scripts on the primary Tivoli Decision Support file server. Reference: Task 6: Creating and configuring the scripts on page 146.
6. On the Hub TMR, create the tasks and jobs to run the scripts defined in task 5. Reference: Task 7: Creating the tasks and jobs on page 148.
7. On the Hub TMR, schedule the jobs. Reference: Task 8: Scheduling the jobs on page 152.
Most of the above steps are fairly straightforward and can be easily accomplished by following the instructions in the referenced material. In the following sections, we will provide more detailed explanations of the following tasks starting with task 6.
Task 6: Creating and configuring the scripts As described in Section 5.8, Suggested architecture and solution design on page 140, all secondary TDS file servers are updated after the generation of a new cube or after the installation of a new TDS Discovery Guide.
Because the cube update fails if a user has a view open while the cubes are being copied, the update process first runs a script (shown in Figure 86) that copies all the generated cubes from the primary TDS file server into a temporary directory on each secondary TDS file server. Later, another script (shown in Figure 87 on page 148) copies these cubes from the temporary directory into the Cubes directory on the secondary TDS file server. In the scripts, PRIM_TDS_FS is the hostname of the primary TDS file server and SHARENAME is the share name of the primary TDS file server directory. These two scripts should be created on the primary TDS file server.
Note
The files to be transferred from the primary file server to the secondary file servers are \\PRIM_TDS_FS\SHARENAME\Cubes\* and \\PRIM_TDS_FS\SHARENAME\Data\* where PRIM_TDS_FS is the hostname of the primary TDS File server and SHARENAME is the share name of the primary TDS file server directory. Figure 86 shows the update procedure first script:
@ECHO OFF
::
:: The following commands set the drive that corresponds to the
:: TDS shared directory on the primary TDS file server
:: and the primary TDS file server hostname
::
SET PRIM_TDS_FS="Here is your Primary TDS File server hostname"
SET SHARENAME="Here is Sharename of the primary TDS File server directory"
SET FS_DRIVE=\\%PRIM_TDS_FS%\%SHARENAME%
SET FS_CUBE=%FS_DRIVE%\Cubes
SET FS_DATA=%FS_DRIVE%\Data
::
:: The following are the secondary TDS File server local directories
::
SET DIR_TDS="Here is the complete path of the secondary TDS File server directory"
SET DIR_CUBES=%DIR_TDS%\Cubes
SET DIR_DATA=%DIR_TDS%\Data
SET DIR_TEMP=%DIR_CUBES%\Temp
::
:: This step copies all new Cubes from the primary TDS File server
:: to the temporary directory on the secondary TDS File server
::
ECHO ... Transferring the Cubes and configuration files from the primary TDS File Server ...
xcopy %FS_CUBE%\* %DIR_TEMP% /v
xcopy %FS_DATA%\* %DIR_DATA% /v
::
Figure 86. The update procedure first script - transfer.cmd
Note
The scripts shown in Figure 86 were used in our lab environment. The intention is to provide the reader with an example; these scripts may need to be modified to suit specific environment requirements. Figure 87 shows the update procedure second script:
@ECHO OFF
::
:: The following commands set the secondary TDS File server local directories
::
SET DIR_TDS="Complete path of the secondary TDS File server directory"
SET DIR_CUBES=%DIR_TDS%\Cubes
SET DIR_TEMP=%DIR_CUBES%\Temp
::
:: The following command copies the updated Cubes to the Cubes directory
::
ECHO ... Copying the Cubes ...
xcopy %DIR_TEMP%\* %DIR_CUBES% /v
::
Task 7: Creating the tasks and jobs In order to have the above scripts executed in an automated fashion, two tasks and two jobs should be created in the SPR_TaskLib Task library. Figure 88 on page 149 shows the parameters of Transfer_Cubes, the first task to be created.
This task will run the script transfer.cmd stored in the Primary TDS file server sunfish. You can define this task using wcommands as shown in Figure 89:
# wcrttask -t Transfer_Cubes -l SPR_TaskLib -r senior \
-i w32-ix86 sunfish "C:\Program Files\TDS\transfer.cmd" \
-c "This task runs the transfer.cmd script"
Figure 90 on page 150 shows the configuration of the Transfer_Cubes job associated with the Transfer_Cubes task.
If the execution mode is set to parallel, the Transfer_Cubes job will run at the same scheduled time in all hosts in the Secondary_TDS_FileServers Profile Manager.
Figure 92 shows the parameters of Copy_Cubes, which is the second task to be created.
Similarly, this task will run the script copycubes.cmd, which is stored in the Primary TDS file server sunfish. You can define this task using wcommands as shown in Figure 93:
# wcrttask -t Copy_Cubes -l SPR_TaskLib -r senior \
-i w32-ix86 sunfish "C:\Program Files\TDS\copycubes.cmd" \
-c "This task runs the copycubes.cmd script"
Figure 94 on page 152 shows the configuration of the Copy_Cubes job associated with the Copy_Cubes task.
If the execution mode is set to parallel, this job will run at the same scheduled time on all hosts in the Secondary_TDS_FileServers Profile Manager. You can define this job using wcommands, as shown in Figure 95:
Task 8: Scheduling the jobs Once you have defined the tasks and jobs as described in Task 7: Creating the tasks and jobs on page 148, you should define the schedule.
Note
These jobs should be scheduled after the cube-building process has finished. For our example, we assume that the cubes are built daily and that the process finishes at 1:00 a.m. The Copy_Cubes job must run after the Transfer_Cubes job finishes. You can use either the Tivoli Desktop or the command line to schedule the jobs, as shown in Figure 96:
# wschedjob -n Transfer_Cubes -L SPR_TaskLib -t 08/03/1999 02:00 \
-r "24 hour" -h sunfish -f "C:\Program Files\TDS\transfer.log" \
-s "Transfer the TDS Cubes to all Secondary TDS File Servers"
# wschedjob -n Copy_Cubes -L SPR_TaskLib -t 08/03/1999 04:00 \
-r "24 hour" -h sunfish -f "C:\Program Files\TDS\copycubes.log" \
-s "Transfer the TDS Cubes from the temporary directory"
Cognos PowerPlay in administrator mode, Sybase open client, and the 32-bit Sybase ODBC driver must be installed on this machine.
Table 15. Server Performance Prediction Guide deployment steps
1. Install the Distributed Monitoring Roll-up Patches on the Hub and Spoke TMR servers and all Managed Nodes. Reference: Tivoli Decision Support for Server Performance Prediction guide Release Notes.
2. Install the Tivoli Decision Support for Server Performance Prediction Guide on the Tivoli Decision Support Discovery Administrator. Reference: Tivoli Decision Support for Server Performance Prediction guide Release Notes.
The remaining steps are performed on the Tivoli Decision Support Discovery Administrator; refer to the Tivoli Decision Support Administrator Guide and the Tivoli Decision Support for Server Performance Prediction guide Release Notes.
Most of the steps in Table 15 are fairly straightforward and can easily be accomplished by following the instructions in the referenced material. The DM Roll-up task, however, requires a certain amount of configuration effort. The next section provides a more detailed explanation of the installation and configuration of the DM Roll-up tool.
5.9.5.1 Installing the Distributed Monitoring Roll-up tool
The installation is a standard, almost fully automatic Tivoli patch-style installation; the only manual steps are creating the database and tables by running the Sybase or Oracle scripts included with this module and performing some simple configuration.
The Tivoli Decision Support for Server Performance Prediction Discovery Guide presents system metrics collected by Tivoli Distributed Monitoring and archived by the DM Roll-up module. The DM Roll-up components collect, collate, and store the raw monitor data in the database through a RIM connection. The data samples taken as part of Distributed Monitoring data collection are aggregated and rolled up from each subscribed host into the DM Roll-up database by a predefined task. The Tivoli Distributed Monitoring (DM) 3.6.1 product must be installed on the Hub TMR and updated with the Roll-up patches. Table 16 lists, in sequence, the tasks that must be carried out to install and configure the Server Performance Prediction Roll-up module:
Table 16. DM Roll-up installation steps
Steps 1 through 5: Reference: Framework Installation Guide 3.6 and Tivoli Decision Support for Server Performance Prediction guide Release Notes.

Step 6: Create the RIM object.
Where: Hub TMR.
Reference: Task 6: Creating RIM object on page 157 and the SPP Release Notes.

Step 7: Create the SPP database repository.
Where: Hub TMR.
Reference: Task 7: Create the SPP database repository on page 158 and the Tivoli Decision Support for Server Performance Prediction guide Release Notes.

Step 8: Set up the SPP Roll-up Tivoli environment.
Where: Desktop of Hub TMR and Spoke TMRs.
Reference: Task 8: Setting up the SPP Roll up Tivoli environment on page 159.

Step 9: Perform the administration tasks.
Where: Desktop of Hub TMR and Spoke TMRs.
Reference: Task 9: Performing the administration tasks on page 160.

Step 10: Schedule the Roll-up tasks.
Where: Hub TMR and Spoke TMRs.
Reference: Task 10: Scheduling the Roll-up tasks on page 160.
In the following sections, we highlight some of these tasks (starting with Task 6) and supply the reader with pertinent deployment information.
Task 6: Creating RIM object
The 361-DMN-9C patch creates the RIM object; however, the patch installation options do not allow the user to specify the RIM host, so the patch creates the RIM object on the TMR server by default. In our case study, the Hub TMR server and the database server are on the same machine. If the TMR server is not your database server, which is the case with all Spoke TMRs, you will need to delete and re-create the RIM object.
Note
The default RIM object name is spr_rim, which connects to the dm_db database. The default database user is dm, with the password dm_tds. To delete the DM Roll-up RIM object, use the wdel command:
# wdel @RIM:spr_rim
To re-create the RIM object, use the wcrtrim command. For a detailed explanation of this command, refer to the TME 10 Framework 3.6 Reference Manual, SC31-8434.
# wcrtrim -v Sybase -h rim_host -d dm_db -u dm -H /sybase -s SYBASE spr_rim
After the execution of this command, you will be prompted for the user password.
Note
The RIM host cannot be reset using the wsetrim command. The RIM object has to be deleted and recreated.
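Because wsetrim cannot change the RIM host, the whole delete-and-recreate sequence must be repeated whenever the RIM host changes. The sketch below only composes and prints the two commands from this section so that they can be reviewed and run by hand; the RIM_HOST and SYBASE_DIR values are placeholders you must replace with your database server's host name and Sybase home directory.

```shell
#!/bin/sh
# Sketch: compose the commands that move the DM Roll-up RIM object to a
# new RIM host.  spr_rim, dm_db, and dm are the documented defaults;
# RIM_HOST and SYBASE_DIR are placeholders for your environment.
RIM_HOST="${RIM_HOST:-rim_host}"
SYBASE_DIR="${SYBASE_DIR:-/sybase}"

# wsetrim cannot change the RIM host, so the object must be deleted
# and created again with wcrtrim (which prompts for the dm password).
DEL_CMD="wdel @RIM:spr_rim"
CRT_CMD="wcrtrim -v Sybase -h $RIM_HOST -d dm_db -u dm -H $SYBASE_DIR -s SYBASE spr_rim"

echo "$DEL_CMD"
echo "$CRT_CMD"
```

Printing the commands before executing them is a deliberate choice: a mistyped RIM host leaves the Roll-up module unable to reach the database, so the command line is worth a second look.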
Task 7: Create the SPP database repository
The Tivoli Decision Support for Server Performance Prediction Guide relies on two Tivoli application databases: the Tivoli Distributed Monitoring Roll-up module database and the Tivoli Inventory database.
The Inventory database is optional for the operation of the Server Performance Prediction Guide; it supplies additional enterprise hardware data when the customer has that product in their environment. If the Tivoli Inventory database is not available, as is the case with the SDC West environment, you must use a set of default files supplied with the TDS Discovery Guide installation in the $TDS/util directory. For all five cubes of the Server Performance Prediction Guide to build successfully, these files must be present: either the default versions or, when the Inventory database is available and the inventory query can run, versions populated with data. Always retain copies of the default versions of the following files:
DM_INV_Memory.csv
DM_INV_OsType.csv
DM_INV_Processor.csv
DM_INV_SysByIP.csv
Move these files from the $TDS/util/Tivoli Decision Support for Server Performance Prediction directory into the $TDS/data/export directory.
Before the DM TDS Roll-up can store the aggregated metrics in an RDBMS, you must create the database, or repository. Scripts are provided to create the database and install the DM_METRICS schema. After a successful installation, the following RDBMS script files are located in the $BINDIR/TME/SENTRY/TDS/rdbcfg directory of the Hub TMR server:
cr_rollup_db.sh
rm_rollup_db.sh
new_passwd.sh
cr_db.ora
cr_tbl.ora
rm_db.ora
cr_db.syb
cr_tbl.syb
rm_db.syb
There are two ways to create your SPP Roll-up database and tables. One is to customize the SQL templates, such as cr_db.syb and cr_tbl.syb. The other is to run the cr_rollup_db.sh script on the RIM host; it reads the RIM object created for the SPP Roll-up to obtain database information, asks you for size and device information, customizes the SQL scripts automatically, and creates the required database and tables. We ran the cr_rollup_db.sh script on the Hub TMR and followed the simple instructions to create the DM repository.
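Staging the default Inventory files can itself be scripted. The sketch below assumes $TDS points at the Tivoli Decision Support installation directory (it falls back to a scratch directory purely so the sketch is runnable anywhere); the file names and source and target directories are the ones given above.

```shell
#!/bin/sh
# Sketch: stage the four default Inventory CSV files so that all five
# SPP cubes can build when no Tivoli Inventory database is available.
# TDS normally points at the TDS installation; the fallback below is
# only so this sketch can run anywhere.
TDS="${TDS:-${TMPDIR:-/tmp}/tds_demo}"
SRC="$TDS/util/Tivoli Decision Support for Server Performance Prediction"
DST="$TDS/data/export"

mkdir -p "$DST"
copied=0
for f in DM_INV_Memory.csv DM_INV_OsType.csv \
         DM_INV_Processor.csv DM_INV_SysByIP.csv; do
    # Copy each default file that exists, keeping count.
    if [ -f "$SRC/$f" ]; then
        cp "$SRC/$f" "$DST/$f" && copied=$((copied + 1))
    fi
done
echo "staged $copied default file(s) into $DST"
```

Copying rather than moving the files preserves the retained default versions, so they can be restaged if a later Inventory export turns out to be incomplete.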
Task 8: Setting up the SPP Roll up Tivoli environment
The installation process of the Tivoli Distributed Monitoring SPP Roll up Configuration Patch 3.6.1 creates a policy region called TMRHostname_SPR_Region on the Hub TMR server. Within this region, the SPR_TaskLib task library and the SPR_ProfileMgr profile manager are created. The SPR_ProfileMgr profile manager contains two profiles, SPR_NtProfile and SPR_UnixProfile, which are configured with prescanned monitors and created during installation.
This TMRHostname_SPR_Region region is not visible on the administrator's desktop after installation; it resides as a Top Level Policy Region. You will need to drag and drop this newly-created policy region from the Top Level Policy Region view onto the administrator desktop. From the desktop, select the following:
Task 9: Performing the administration tasks
The SPR_ProfileMgr object is created as a dataless profile manager; therefore, we cannot subscribe any other profile managers to it. If you have a large number of endpoints, it is convenient to group them into dataless profile managers and then make each of those profiles a subscriber of the SPR_ProfileMgr. To do so, change the properties of the SPR_ProfileMgr profile manager to convert it to a database profile manager. The SPR_ProfileMgr profile manager is, by default, assigned as a subscriber to the SPR_DataAggregation job. The TMR server is, by default, a subscriber to the SPR_RollupIntoDB job.
Task 10: Scheduling the Roll-up tasks
Tivoli Distributed Monitoring 3.6.1 has no problem monitoring a large number of servers per TMR server, and since SDC West has deployed the Hub-Spoke architecture, scalability should not be a problem. However, we recommend scheduling the roll-up tasks differently for each Spoke TMR so that the Spoke TMR servers do not all roll their data up to the database server at the same time. This is more of a database server capacity concern than a TDS concern. The two tasks that should be rescheduled, SPR_DataAggregation and SPR_RollupIntoDB, are located in the SPR_TaskLib task library.
Note
You may choose to schedule the jobs to run at different times using the TME Scheduler. The point to remember is that the data aggregation job, SPR_DataAggregation, must be scheduled to run and finish before the SPR_RollupIntoDB task is scheduled to start.
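One simple way to satisfy both constraints (aggregation finishing before roll-up, and the spokes not rolling up simultaneously) is to offset each Spoke TMR's schedule by a fixed gap. The sketch below only computes the staggered start times; the spoke names and the 30-minute gap are illustrative assumptions, and the resulting times would then be applied with the TME Scheduler.

```shell
#!/bin/sh
# Sketch: compute staggered SPR_DataAggregation start times, one per
# Spoke TMR, so the spokes do not hit the database server at once.
# Spoke names and the 30-minute gap are illustrative only.
BASE_HOUR=2          # first spoke aggregates at 02:00
GAP_MINUTES=30

i=0
for spoke in spoke_east spoke_west spoke_south; do
    offset=$((i * GAP_MINUTES))
    hour=$((BASE_HOUR + offset / 60))
    min=$((offset % 60))
    # SPR_RollupIntoDB for each spoke would be scheduled a safe
    # interval later, after its SPR_DataAggregation has finished.
    printf '%s: SPR_DataAggregation at %02d:%02d\n' "$spoke" "$hour" "$min"
    i=$((i + 1))
done
```

With three spokes and a 30-minute gap, the aggregation windows land at 02:00, 02:30, and 03:00, spreading the load on the database server across the night.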
Step 1: Size the TEC database.
Where: TEC database on the Sybase server.
Reference: Tivoli Decision Support for Event Management Release Notes.

The subsequent steps are performed either on the TEC database on the Sybase server or on the Tivoli Decision Support Discovery Administrator; the references for those steps are the Tivoli Decision Support Administrator Guide and the Tivoli Decision Support for Event Management Release Notes.
Report
DASD utilization by server (refer to Figure 46 on page 102)
Percent of availability by server (refer to Figure 47 on page 103)
Alert Summary report (refer to Figure 48 on page 103)
Lotus Notes - MTA server report (refer to Figure 52 on page 106)
Lotus Notes - Sessions per minute report (refer to Figure 55 on page 108)
Lotus Notes - SMTP transferred messages report (refer to Figure 57 on page 109)
Hard disk and file system utilization report by operating system (refer to Figure 59 on page 111 and Figure 61 on page 112)
5.10.2.1 Network Event Analysis Guide
While the Network Event Analysis Discovery Guide helps control the ever-growing management to-do list that network events create, it also tracks key performance indicators of your network by collecting, aggregating, and analyzing all the event traffic that NetView traps and can correlate. With the information discovered using this guide, you can make meaningful decisions about improving your network performance and identify problems before they get out of control. The following topics are provided by the NetView Network Event Analysis Discovery Guide:
How is the event rate (by smartset) affecting the network?
How is the event rate affecting the network?
5.10.2.2 Network Segment Performance Guide The Network Segment Performance Discovery Guide provides the overall health of a segment for any period of time. It allows you to analyze collisions, broadcasts, multicasts, and performance locations within the network. This guide will also allow you to view information on network segment statistics. With the information discovered, you can identify performance problems and their causes.
The following topics are provided by the NetView Network Segment Performance Discovery Guide:
How are errors affecting the network?
How is multicast and broadcast traffic affecting the network?
What is my network traffic pattern?
What are the main failure points?
5.10.2.3 TDS for Information Management Guide This guide enables you to analyze the problem and change management information stored in your Tivoli Service Desk for OS/390 (formerly TME 10 Information/Management) host databases. This guide is organized into two categories - problem management and change management - to present the most typically sought-after information related to your service desk activities. This guide allows you to focus on activities and problems occurring within the service desk arena. The Information Management guide tracks the most common aspects encountered by a service desk including:
Total time spent resolving problems
Open problem volume by type
Distribution of problems by severity
Activity distribution by associated changes
Activities coming up in the next six months
Estimated duration of upcoming activities
5.10.2.4 Call Center Management Decision Support Guide
By limiting your focus to this area and viewing only the relevant data, a call center management analysis will help you determine how effective, efficient, and profitable your support center is. This Tivoli Decision Support Guide presents the most typical data a support center collects, including the number of interactions, the first-call resolution rate, the elapsed time of interaction, and the time to resolution.

5.10.2.5 Relationship Management Decision Support Guide
This guide highlights the relationship between the organization and its customers by focusing on how well requests are being resolved and the overall health of the relationship. By viewing data from the perspective of the request life cycle, you can identify obstacles or deviations from the most efficient service process. Working with key business indicators, such as Service Level Agreement (SLA) compliance and the number of call transfers, you are better able to understand how your customer perceives your service.

5.10.2.6 Knowledge Assessment Decision Support Guide
With the Knowledge Assessment Decision Support guide, you can begin to understand what solutions and diagnostic aids are working best to help you manage your investment in knowledge. By looking at the Knowledge Engineering category, you can get a better idea of how well you are using diagnostics and what knowledge is most effective. Indicators to explore include the number of requests resolved with diagnostics and the number of solutions.

5.10.2.7 Service-level Management Decision Support Guide
By treating your support commitments as a focal point for further exploration, the guide for service-level management helps you ascertain how well you are operating within budgeted guidelines and performing against the SLAs you have established. For example, the Support Commitments category compares your customers' expectations with how well your organization is meeting them.
5.10.2.8 Asset Management Decision Support Guide With the asset management Decision Support guide, you can get a better understanding of the assets your organization has deployed and their associated cost structures. This guide presents the most typical data an asset manager needs but, historically, does not have easy access to. This data includes purchasing trends, yearly trend for asset acquisition, percent of assets under lease or contract, asset base cost, as well as which network hubs have the most connections.
5.10.2.9 Change Management Decision Support Guide If there is one thing that is constant, it is change, and in a dynamic work environment, the change management Decision Support guide can help you get a handle on corporate changes and give you the data you need to make solid management decisions. Focusing on how changes affect your bottom line, this guide includes:
Labor cost groupings
What changes are completed, overdue, and upcoming
Estimated labor costs
Percent of changes over cost estimate
Request submission trend
Task distribution
Which systems have a high memory page scan rate?
Which mail systems have high network utilization?
What is the average forecast mail delivery time?
The analysis of the views and information that these questions generate will allow the system analyst to identify whether there are any response- or workload-related problems with the mail servers. The systems analyst needs to do the following:
Find out why the response from the Lotus Notes mail servers is poor
Analyze the information
Consider solution options
Present the proposed technical solution to the IT Manager for a final decision on any changes or technology investment that may be necessary to resolve the problem
To begin the decision process, the system analyst will use the Tivoli Decision Support Discovery Interface, select both the Server Performance Prediction Discovery Guide and the Domino Management Discovery Guide, and choose the role of systems analyst.
6.3.1.1 Which mail systems have CPU workload problems?
After selecting the All System Metrics report, we filter the information to see only the Lotus Notes servers. The resulting graph, as shown in Figure 98 on page 171, displays the Lotus Notes server performance metrics by CPU utilization. This view shows us the monthly average percentage of CPU busy time for the Lotus Notes mail servers: nickel, desdemona, cypress, and burnet. The graph shows several CPU metrics, from which it is clear that the server nickel has high average CPU busy, system time, and user time utilization. This could be an indication that the server has performance-related problems. We can also see that all the other servers are under less stress.
6.3.1.2 Which systems have a high average CPU run queue length?
We will find the answer to this question by selecting the Server Performance Prediction Discovery Guide and then selecting the Busiest Systems report. In this view, as shown in Figure 99 on page 172, we can look at the busiest systems based on the average daily run queue length metric for each system. The run queue length is the number of processes that are ready to run (processes not waiting for input/output or user input) but that the system cannot dispatch until it has free processor cycles. From the graph, we can see that the server nickel has a high run queue length. This is a key metric for determining processor load and is measured as the average number of waiting processes. The reports also show us that the other servers have average to normal workload characteristics.
Figure 99. Lotus Notes mail servers daily average run queue length
6.3.1.3 Which mail systems have high memory utilization?
Using the All System Metrics report in the Server Performance Prediction Discovery Guide and filtering on By Physical Memory, we can see, as shown in Figure 100 on page 173, the memory utilization for the Lotus Notes servers. The server nickel has high memory utilization, while the usage for all the other servers is moderate to low.
6.3.1.4 Which systems have a high memory page scan rate?
By selecting the Systems That Need More Memory report from the Server Performance Prediction Discovery Guide, the system analyst can drill down and retrieve information from all Lotus Notes servers. Figure 101 on page 174 highlights systems with physical memory from 32 MB up to 64 MB where the page scan rate is exceptionally high.
The page-scan rate is presented in pages scanned per second. To evaluate this metric, you need to take into account the amount of physical memory on the system. The server nickel has 64 MB of physical memory installed. A scan rate of 1000 pages/second is considered very high on a system with 64 MB of physical memory, but not on one with 256 MB. The server nickel has a scan rate of nearly 1000 pages per second, which is high for this amount of memory and will need to be corrected.
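The rule of thumb here, that roughly 1000 pages/second is very high on 64 MB but not on 256 MB, amounts to a threshold that scales with installed memory. The sketch below encodes that reasoning; the 500 pages/second baseline for a 64 MB system is an illustrative assumption, not a product value.

```shell
#!/bin/sh
# Sketch: judge a page-scan rate relative to installed memory.  The
# threshold scales linearly with memory; the 500 pages/s baseline for
# a 64 MB system is an illustrative assumption, not a product value.
MEM_MB=64
SCAN_RATE=1000       # pages scanned per second (server nickel)

# Scale the baseline threshold with the amount of physical memory.
LIMIT=$((500 * MEM_MB / 64))
if [ "$SCAN_RATE" -gt "$LIMIT" ]; then
    verdict="high"
else
    verdict="acceptable"
fi
echo "$MEM_MB MB at $SCAN_RATE pages/s: scan rate is $verdict"
```

With 256 MB installed, the scaled limit becomes 2000 pages/second, so the same 1000 pages/second rate would be reported as acceptable, matching the evaluation in the text.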
Figure 101. Lotus Notes mail servers that need more memory
6.3.1.5 Which mail systems have high network activity?
By selecting All System Metrics from the Server Performance Prediction Discovery Guide, the system analyst can drill down and retrieve information on all Lotus Notes servers. Filtering on network utilization displays the network activity. Figure 102 on page 175 highlights the systems with high network activity: the servers desdemona, cypress, and burnet have relatively low network utilization, while that of nickel is high. Previously, we found that nickel had high CPU utilization; this, coupled with high network activity, is an indication of an under-provisioned system.
6.3.1.6 What is the average forecast mail delivery time?
From the Domino Management Discovery Guide and the When might servers begin experiencing problems report, the system analyst can filter by mail server and then by the Mail.AverageDeliveryTime measure. Figure 103 on page 176 shows the average and peak mail delivery time forecast for the server nickel. The forecast highlights that, over the next 30, 60, and 90 days, the averages and peaks of mail delivery time are increasing. In addition, since, according to the SLA, all mail deliveries must complete within 20 seconds, nickel will exceed the SLA within 30 days.
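Reduced to its simplest form, this kind of forecast is a linear extrapolation of the average delivery time against the 20-second SLA. The sketch below uses illustrative numbers (an 18-second current average growing by 3 seconds per 30 days); only the 20-second SLA figure comes from the text, and the guide's own forecasting is more sophisticated than this.

```shell
#!/bin/sh
# Sketch: linear extrapolation of average mail delivery time against
# the 20-second SLA.  Current average and growth rate are illustrative;
# only the 20-second SLA comes from the case study.
SLA_SECONDS=20
AVG_NOW=18           # current average delivery time (illustrative)
GROWTH_PER_30D=3     # seconds of growth per 30 days (illustrative)

days=0
avg=$AVG_NOW
# Step forward 30 days at a time until the average crosses the SLA.
while [ "$avg" -lt "$SLA_SECONDS" ]; do
    days=$((days + 30))
    avg=$((avg + GROWTH_PER_30D))
done
echo "SLA of ${SLA_SECONDS}s breached within about $days days"
```

With these illustrative numbers the breach lands within 30 days, which is the conclusion the forecast draws for the server nickel.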
Figure 103. Lotus Notes mail server - forecasted average mail delivery time
6.3.1.7 The system analyst's conclusions and suggestions
Based on the results of the information gathered earlier in this section, the system analyst will deliver a report to the IT Manager addressing the cause of the problem and recommending a course of action.
The following conclusions can be drawn from the discovery of the network:
The servers desdemona, cypress, and burnet are operating within normal parameters and are meeting the SLA.
The server nickel is overloaded and under-provisioned; its CPU is inadequate for the workload.
The Lotus Notes mail service is currently operating at capacity, and the response problem affects only the customers served by the overloaded server nickel.
The SLA is forecast to be compromised within 30 days, since the workload on server nickel is increasing.
The system analyst makes the following recommendations to resolve the problem:
Add or upgrade the CPU on server nickel. This will relieve the problem in the short term but does not address the underlying problem of nickel being overloaded.
Increase the amount of physical memory in nickel to 128 MB. This will solve the problem in the medium term but still does not resolve the fact that nickel is overloaded.
Redistribute the workload across the other servers. This will offer a longer-term solution but might be disruptive to the organization.
6.3.2.1 Which systems are over- or under-provisioned?
By selecting the How might I improve performance on my systems? report from the Server Performance Prediction Discovery Guide, the IT Manager can drill down and retrieve information from all Lotus Notes servers. Figure 104 on page 178 highlights all Lotus Notes servers that are either under- or over-provisioned.
If a system shows very high CPU utilization but relatively low network activity, we can say that the CPU is inadequate for the workload; that is, the system is under-provisioned. If a system shows low CPU utilization but relatively high network activity, we can say that the CPU is excessive for the workload; that is, the system is over-provisioned.
The measure used for this view is the processor overload. It is expressed as a percentage: the difference between the CPU and network utilization, divided by the network utilization. In this case, the report shows that the server nickel has high CPU utilization as well as relatively high network activity; this can be interpreted as under-provisioned. It can also be noted that the other servers are over-provisioned.
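The processor-overload measure can be written out directly from that definition: the difference between CPU and network utilization, divided by the network utilization, expressed as a percentage. The utilization figures in the sketch below are illustrative; the report itself computes this from the collected metrics.

```shell
#!/bin/sh
# Sketch: the processor-overload measure: (CPU - network) / network,
# as a percentage.  The two utilization figures are illustrative.
CPU_UTIL=90          # percent CPU busy
NET_UTIL=60          # percent network utilization

OVERLOAD=$(( (CPU_UTIL - NET_UTIL) * 100 / NET_UTIL ))
echo "processor overload: ${OVERLOAD}%"
# A positive value means CPU use outpaces the network traffic it serves
# (under-provisioned); a negative value means spare CPU capacity
# relative to the traffic (over-provisioned).
```

For the illustrative figures of 90% CPU against 60% network utilization, the overload comes out at 50%, which would place the system on the under-provisioned side of the report.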
6.3.2.2 Where are the performance anomalies?
By selecting the How might I improve performance on my systems? report from the Server Performance Prediction Discovery Guide, the IT Manager can drill down and retrieve information from all Lotus Notes servers. Figure 105 on page 179 highlights the situation for all Lotus Notes servers.
This view can be useful in detecting areas where one or more of the systems is not performing as expected. The logic behind this view is that, for systems having the same purpose and the same hardware configuration, processor utilization should be proportional to the network activity on the system. If it is not, then there is probably something amiss with one of the systems.
In this report, we can see that the server nickel does not have the same behavior as the other servers and that it has much higher average statistics.
6.3.2.3 What performance problems are on the horizon?
By selecting What Performance Problems are on The Horizon? from the Server Performance Prediction Discovery Guide, the IT Manager can select the Systems most quickly approaching critical thresholds report and drill down to retrieve information from all Lotus Notes servers. Figure 106 on page 180 highlights all Lotus Notes servers that are predicted to hit a critical performance threshold within the next 180 days. A critical situation is highlighted in red for server nickel.
6.3.2.4 IT Manager's conclusions
The IT Manager's role here is to look at the broader issue of managing the network in terms of meeting SLAs and providing a scalable, cost-effective solution. Based on the results of the information gathered earlier in this section, the IT Manager will deliver a detailed proposed-solution report to the Chief Executive Officer.
It is clear from the analysis of the reports that the server nickel is overloaded and that there is an uneven distribution of workload among the Lotus Notes mail servers. It has also become apparent that there is no formal process for adding users and services to the mail servers. After analyzing the reports, the IT Manager and the system analyst decide on the following solution:
Since the enterprise has an under-provisioned server, redistribute the workload among all Lotus Notes servers, providing a longer-term solution while maximizing the capabilities of the network. This must be done within 10 days because the server nickel will soon compromise the SLA.
Implement a process for adding users and services to the Lotus Notes mail servers that includes the use of Tivoli Decision Support to identify which servers are best able to handle the extra workload. This will allow the IT Manager to leverage the existing IT infrastructure and get a maximum return on the investment.
Make budgetary provisions for memory and CPU upgrades for all the Lotus Notes servers. The trends from TDS show that there will be significant growth in workload and users.
Note that TDS has given the manager the power to predict when systems will reach their critical thresholds, so upgrades can be planned and budgeted well in advance in order to maintain SLAs. Finally, the manager must present these proposals and solutions to the CEO.
6.3.3.1 What are the performance trends?
By selecting Is my resource utilization growing? from the Server Performance Prediction Discovery Guide, the CEO can select the Daily average performance trend report and drill down to retrieve information from all Lotus Notes servers.
Figure 107 on page 182 highlights the growth trend for critical system-performance metrics over the last four weeks. It looks at the average value on a day-by-day basis. This report is useful for spotting growth trends and changing patterns in resource utilization.
We can see that the growth trend for the Notes server nickel is out of proportion to the rest of the Lotus Notes servers.
6.3.3.2 The CEO's conclusions
Now, the problem and the solution proposed by the IT Manager are clear to the CEO. After considering all the information, the CEO approves the project.
Based on the information provided by the IT Manager, the CEO approves the project to redistribute the workload among all Lotus Notes servers; it must be done within 10 days in order to keep the SLA. The CEO also asks the IT Manager to prepare a detailed process for adding services to the Lotus Notes servers in order to make better use of the existing IT infrastructure and the investment that has been made in the Lotus Notes mail service. Based on the information gathered from TDS, the CEO finally prepares a report requesting the budgetary provision for memory and CPU upgrades for all the Lotus Notes servers. The CEO now has a detailed description of the problem and is armed with confidence, answers, and solutions to present to the Executive Committee.
6.4 Conclusion
In this simple scenario, we have attempted to show the power and diversity of TDS. By choosing different roles and following their decision-making strategies to resolve a problem, we can see how each decision maker has a part in the final decision. Each role player needs different information from the same data: the analyst needs to know the cause of the problem; the manager needs to understand the impact of the solution on the network as a whole; and the CEO needs to know what the benefits to the organization are. It would be almost impossible to show all the diverse roles and information that TDS can produce. Knowledge can help shape an organization's thinking and its approach to its customers and service issues. As you have seen in this scenario, even a brief and basic discovery strategy can yield surprising results and help IT professionals make better business decisions.
Even though the support plan has been designed for Indianapolis to handle core issues (Cognos, basic functionality, and so on), some cases may be handled in Austin or Raleigh; therefore, an understanding of the core application is necessary. For related product support:
1. Cognos Support - For now, contact Indianapolis.
2. Seagate Support - oemcrw@img.seagatesoftware.com
General Availability
The Guides in this table became generally available between the 2nd and 4th Quarters of 1999; most were already available at the time of writing.
General Availability
The Guides in this table ship with TDS and were currently available with Tivoli Decision Support Version 2.0.
Note
Tivoli frequently announces the availability of new Guides. For the latest information on Guide availability, refer to Tivoli's Web page:
http://www.tivoli.com
been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites.

Any performance data contained in this document was determined in a controlled environment; therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

Reference to PTF numbers that have not been released through the normal distribution process does not imply general availability. The purpose of including these reference numbers is to alert IBM customers to specific information relative to the implementation of the PTF when it becomes available to each customer according to the normal IBM PTF distribution process.

The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries:
AIX, AS/400, AT, DB2, Home Director, IBM, MQ, MQSeries, Netfinity, NetView, OS/2, OS/390, RS/6000, S/390, System/390, Tivoli, Tivoli Decision Support, Tivoli Decision Support Discovery Guide
The following terms are trademarks of other companies: Cognos, the Cognos logo, Impromptu, PowerPlay, PowerCube, and Scenario are trademarks of Cognos Inc. Oracle is a trademark of Oracle Inc. Sybase is a trademark of Sybase Inc. Crystal Reports is a trademark of Seagate Software Inc. Tivoli Service Desk is a trademark of Software Artistry, a Division of Tivoli.
C-bus is a trademark of Corollary, Inc. in the United States and/or other countries. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and/or other countries. Microsoft, Windows, Windows 95, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries. PC Direct is a trademark of Ziff Communications Company in the United States and/or other countries and is used by IBM Corporation under license. ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States and/or other countries. (For a complete list of Intel trademarks, see www.intel.com/tradmarx.htm) UNIX is a registered trademark in the United States and/or other countries licensed exclusively through X/Open Company Limited. SET and the SET logo are trademarks owned by SET Secure Electronic Transaction LLC. Other company, product, and service names may be trademarks or service marks of others.
e-mail address: usib6fpl@ibmmail.com
Contact information is in the How to Order section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl/
Fax Orders
United States (toll free): 1-800-445-9269
Canada: 1-403-267-4455
Outside North America: the fax phone number is in the How to Order section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl/
This information was current at the time of publication, but is continually subject to change. The latest information may be found at the redbooks Web site.
List of abbreviations
CEO     Chief Executive Officer
DM      Decision Maker
DMP     Decision Making Process
DSM     Distributed Systems Management
DSS     Decision Support Systems
IBM     International Business Machines Corporation
ITSO    International Technical Support Organization
IT      Information Technology
MLM     Mid-level Manager
NCO     Network Computing Offerings
ODBC    Open Database Connectivity
OLAP    Online Analytical Processing
OS      Operating System
RDBMS   Relational Database Management System
RIM     RDBMS Interface Module
SDC     Service Delivery Center
SLA     Service Level Agreement
SNMP    Simple Network Management Protocol
SPP     Server Performance Prediction
SQL     Structured Query Language
SRM     Server Resource Management
TDS     Tivoli Decision Support
TEC     Tivoli Enterprise Console
TIM     Tivoli Implementation Methodology
TMA     Tivoli Management Agent
TMR     Tivoli Management Region
TPS     Tivoli Professional Service
TSD     Tivoli Service Desk
Index

A
administering 70 administration 70 administrator 68 administrator system 14 AIX 102 AIX performance 104 AIX reports 109 alert log 102 algorithms 13, 61 Analysis 96 analytical decision 75 answers to questions 9 application 10 architecture 30 availability 1

B
bandwidth 65 batch reports 2 best practices 23, 185 better decisions 6 Boulder 88 Brazil 6 broader vision 3 budgetary provision 181 business data 13 Business Decision Information 7 business decisions 183 business hours 12 business indicators 13 business information 78 Business Intelligence 1 defined 3 business knowledge 12 business leaders 27 business models 13 business operation 9

C
calculations 12 Categories 12, 69 categories 62 Category definition 16 central repository 10 CEO 169 CEO conclusions 182 CEO's discovery process 181 charts 2 Chief Executive Officer 168 Chief Executive Officer role 169 Cisco routers 138 Client Database 14 Cognos 12 Cognos PowerPlay 10, 12, 14, 62, 143 Cognos support 187 Cognos Transformer 16, 67 collection 96 comma separated values 129 CPU 99 CPU utilization 97 critical data 167 critical network 137 critical performance 179 critical system 181 critical thresholds 179 Crystal Report files 69 Crystal Reports 10, 12, 62 cube build times 67 Cube building problems 81 Cube building process 64 step 1 65 step 2 66 step 3 66 step 4 67 cube file 16 cubes 68 Cubes definition 16 cubes.mdb 61, 65 Customer confidence 185 customer profile 185 Customer reporting requirements 92 customer requirements questionnaire 27 customer satisfaction 9 customization requirements 33
D
data elements 13 Database Administrator 45 Database Server Information 38 dataless profile manager 160 DB2 16
Decision 1, 5 Decision Makers 4, 6, 61, 73, 78 decision making 9 decision making process 1, 5 decision process 170 Decision Support Guides 32 Decision Support Systems 1, 4, 5 delivering services 9 delivery mechanisms 10 Deployment Phase 47 deployment guide 48 input and output components 47 process flow 48 Training 52 Deployment phase advanced configuration and customization 51 deployment preparation 49 product configuration 51 product installation 50 deployment strategy 24 detailed design 30 Dicing definition 18 Dimension Line definition 17 dimension members 16 dimensions 9 Dimensions definition 17 discovering key information 13 Discovery Administrator 10, 11 Discovery Administrator PC Information 37 Discovery Interface 10, 12 Discovery Interface PC Information 38 discovery process 169 Distributed Systems Management 93 distributed systems management 3 dive 9 DM 4 DM Roll Up patches 156 DM Roll Up Tool installation steps 156 DM Roll Up Tool 155 DMP 5 Documentation Phase 54 input and output components 55 process flow 55 Domino Roll Up module 129 Drill Down 72 Drill Through 80 Drill Through databases 66 Drill-Down 17, 33, 61 Drill-Down definition 17
Drill-Through definition 17 DrillThru.mdb 62, 66, 69 Drill-Up 17 Drill-Up definition 17 DSM 93 DSS 4 characteristics 5 definition 5
E
e-Business 2, 3 ed.mdb 62, 69 EDAdmin.log 82 effectiveness 9 effects 12 efficiency 9, 13 endpoints 61, 74, 77, 160 End-to-End service delivery 3 End-to-End solutions 1 end-user solution 13 enhanced quality 116 enterprise 2 Enterprise Console 29 Enterprise solution 23 enterprise's data sources 21 enterprise's databases 19 enterprise's operation 12 enterprise's service 13 Event Management 125 Event Management Discovery Guide deployment 161 Evolution to Business Intelligence 2 Executive Committee 168 existing 96 existing IT infrastructure 181 existing policies 28 existing reports 51 existing Tivoli Systems Management solution 28 Explorers 21
F
Fail-over analysis 57 fast access to information 2 File server 14 File Server Information 37 files 10 Filter definition 17 forecasts 20
formal process 180 function of an organization 9 functional elements 53 functionality and limitations 42 functionality testing 53 functions and capabilities 31 functions for TDS 37
G
gateways 74, 77 generating views 12 GPFs 82 guide the enterprise 3 Guides 10, 13

H
hardware requirements 145 help desk operation 9 high availability environment 88 high average CPU 169 high CPU utilization 124 high level 9 high level physical design 141 high level report 6 high memory page scan rate 170 high memory utilization 169 high network activity 124 high run queue length 171 high success 21 high-level task flow 87 highly customizable 21 hints 13 historical records 20 Hub 87 Hub TMR 87 Hub-Spoke 87

I
IBM 85 IBM Global Services 85 IBM SDC West 93 IBM SDC-West Environment 85 IBM Service Delivery Center 8 impact on the response times 80 Information Technology 1 Informix 16 In-House 94 In-House reports 97 installation method 70 integration process 60 interactive business indicators 6 Intersolv 14 interview 27 intranet 10 investments 168 isql 81 IT 1 IT manager 9, 168 IT Manager conclusions 180 IT manager discovery process 177 IT manager role 169 IT technical leader 27

J
Job schedule 153 Jobs 148

K
key business 13 keyword searches 13 kickoff 33 knowledge 183 knowledge discovery 9

L
LAN 80 Layer definition 17 Leader 44 levels of details 9 local clients 142 local network 141 local TDS file server 143 log entries 103 log space 90 Logical Design 33
M
managed nodes 160 management gateways 61 management tasks 20 management team 45 management tools 3 manipulate 60 mapping TDS Discovery Guides 33
Measures definition 17 methodology 23 methods 13 Microsoft Access databases 69 migrating 160 migration 85 miss the SLA 126 Modelling 113 Models 12 Models definition 18 multi-dimensional analysis 6, 12 multi-dimensional array 16 multi-dimensional Cubes 61, 80 multi-dimensional space 18 Multiple TMR Environment 77
N
NCO 94 Netfinity Capacity Manager 95, 96 NetView MLM 102 Network 39 network administrator 45 network analyst 9 network bandwidth 63 Network Computing Offerings 94 Network Management 3 network mode 14, 34 network traffic 64, 67 network-wise 89 NotesView 96

O
ODBC connection 61, 69 ODBC connectivity errors 81 ODBC data source 63 ODBC driver 14 OLAP 6, 167 OLE link 83 One-Minute Managers 21 On-Line Analytical Processing 6, 167 open calls 21 Operations 89 optimal 88 Optimization 49 optimization of server 113 Oracle 14, 16 organization's experience 23 organization's methodology 52 output documentation 34 over provisioned 124, 177 overall management 45 overview of the Customer 56 overview of TIM 24 Overview of Tivoli Decision Support 9

P
patches 42 performance 13 permissions 82 perspectives 9 Physical Design 33 policy region 159, 160 populates 11 PowerPlay 12, 16 PowerPlay report files 68 predictive analysis 3 preparation for deployment 43 primary TDS file server 142 printouts 10 pro-active 5 problem management 72 procedure for deploying TDS 144 Profile definition 18 profile manager 159, 160 profiles 159 profitability 13 Project Analysis 41 project Leader 44 Project Plan 26 Project plan 41 Project Planning Phase 40 input and output components 40 process flow 41 Project Task Plan 42 Project Team 26, 44 projections 20 proposal for TDS 30 proposals 181 push content 10 push-delivery 22
Q
qualifier 81 quality 116
quality of collection 113 queries 12, 13 querying 70 quickly approaching thresholds 179
R
rapidly emerging 167 raw data 65 RDBMS 74, 143 realistic deployment 71 reducing IBM IT reporting costs 86 related view definition 18 related views 13, 18, 62, 69 relationships 12 remote sites 74 replication 80 report generation 70 reporting requirements overview 27 Reports Analyst 46 repository 12 Requirements Gathering Phase 25 input and output components 26 process flow 26 resolutions 9 resource allocation 20 resource availability 35 resource requirements 34 responsiveness 72 RIM Host 157 RIM host 87 RIM Object 157 Role definition 18 Rover 82 rover utility 81 Rover window 82
S
San Jose 89 Santa Teresa 89 scalability 160 scheduling the Jobs 152 scopes 9 SDC 85 SDC-West 85 Seagate 12 Seagate support 187 secondary TDS file server 142 Selection Criteria definition 18
Server Performance Prediction 117 Server Performance Prediction deployment steps 154 Server Resource Management 94, 96 Service Delivery Center 85 Service Delivery Center architecture 88 Service Level Agreement 168 service level agreement management 3 Service Level Agreements 1 service management 3 severity level 12 shared drive 15 Single TMR Environment 74 SLA 1, 9, 168 Slicing definition 18 snapshot-style views 20 Software Engineering Life Cycle Model 24 Spoke 87 Spoke TMR 87 Spot trends 20 SPP 117 SQL 61, 82 SQL queries 61 SQL Server 16 SQLplus 81 SRM 94, 96 SRM Reports 104 Stand-alone mode 14 stand-alone mode 34 support centers 6 support process 187 surprising results 183 Sybase 14, 16 System Administrator 45 System Analyst conclusions 176 System Analyst discovery process 169 System Analyst role 168 System Architecture and Design document 26, 33 Systems Analysis Phase 30 input and output components 31 process flow 31 Systems analysis phase preparation 31 Systems Analyst 168 Systems Management 3
T
target market 185
Task library 148, 159 Tasks 148 TDS 5 TDS Discovery Guides 143 TEC 74 technical proposal 26 technical solution 31 templates 12 Testing Phase 52 input and output components 52 process flow 52 the challenge 4 third-party application 12 TIM 23, 25, 185 Tivoli certified consultants 46 Tivoli commands wcrtjob 150, 152 wcrtrim 158 wcrttask 149, 151 wdel 158 wschedjob 153 Tivoli Decision Support 3, 5, 6, 7 architecture 34 Base Product 10 client component 62 components 14, 61 components integration 63 concepts 16 Discovery Administrator 11, 62, 141 Discovery Guides 10, 12 Discovery Guides availability 189 Discovery Interface 12, 14, 68, 143 environment 61 File Server 12, 61, 142 File Server deployment steps 146 functionality 19 functionality diagram 60 functions 70 goal 9 Implementation Modes 14 implementations 74 into the Decision Making Process 5 network mode 71 overview 9 Product Components 10 resource mapping 36 Server Component 10, 12 solution objectives 42 stand-alone mode 70
support process 187 Supported Platforms 15 terminology 16 users 21 Tivoli Decision Support methodology 23 Deployment phase 47 Documentation phase 54 Project planning phase 40 Requirements gathering phase 25 Systems analysis phase 30 Testing phase 52 Tivoli Discovery Guides Call Center Management 165 Change Management 166 Domino Management 128 Event Management 125 Information Management 164 Knowledge Assessment 165 mapping 114 Network Element Performance Prediction 137 Network Event Performance 163 Network Segment Performance 164 Relationship Management 165 Server Performance Prediction 116 Service Level Management 165 Tivoli Distributed Monitoring 7, 60, 61, 95 Tivoli Distributed Monitoring object relationships 92 Tivoli Enterprise Console 7, 60, 95 Tivoli Enterprise Environment 60 Tivoli Enterprise solution 7 Tivoli Implementation Methodology 23, 185 Tivoli infrastructure 75 Tivoli Inventory 7, 60 Tivoli Management Agents 61 Tivoli Management Server 61 Tivoli NetView 60, 95 Tivoli Servers 74 Tivoli Service Desk 7, 72 Tivoli Software Distribution 60 Tivoli System Administrators 45 Tivoli Tier 1 Servers 77 Tivoli Tier 2 Servers 77 TMR 74, 77 top level policy region 160 topic 62 Topic definition 18 Topic Map 168 Topic Map definition 19 Tourists 21
Trainer 46 Training plan 46 transactional data 6 Transformer 16 Transmission 96 trends 12 Troubleshooting TDS 81 Tucson 94 Typical Architecture 35
U
under provisioned 124, 174 UNIX commands df 95, 102 lsps 95, 100 netstat 95 ps 95, 100 vmstat 95, 99 user permissions 45 user-friendly 21

V
version 42 View definition 19 view hint description 62, 69 viewing Crystal reports 69 viewing multidimensional reports 68 views 13 vision 3

W
WAN 80 Web Administrator 46 Web Preparation 96 what if questions 1 Windows 95 15 Windows NT 4.0 15 Windows NT performance 104 workload 63 workload characteristics 171 Workload Characterization 113 Workshop summary 46 Workshops 46

Y
Year 2000 189

Z
Zperstat 96
SG24-5499-00