
NOKIA SIEMENS NETWORKS S.A.

SPOTS V14.0

Installation Guide

January / 2012

E200613-01-115-V14.0I-34
© NOKIA SIEMENS NETWORKS, S.A.
COO OBS SM RD Report.Dev. Bus. & DWH PT

R. Irmãos Siemens, nº 1
2720-093 Amadora
Portugal

All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, for any purpose other than the purchaser’s personal use without the written permission of Nokia Siemens Networks S.A.
This document consists of a total of 420 pages (14 Annexes included).
The information contained in this document is subject to change.

Table of Contents

1 INTRODUCTION ........................................................................................................................................ 10
1.1 Scope.................................................................................................................................................................. 11
1.2 Installation Distribution Media for SPOTS V14 (DVDs) ............................................................................. 12
1.3 SPOTS documentation.................................................................................................................................... 13
1.4 Target group and structure of this manual ................................................................................................... 14
1.4.1 Document Conventions ......................................................................................................................... 15
2 GENERAL OVERVIEW .............................................................................................................................. 16
2.1 Deployment of SPOTS components ............................................................................................................. 17
2.2 Technology Plug-Ins ........................................................................................................................................ 18
2.3 Platform Hardware & Standard Software ..................................................................................................... 19
2.3.1 SPOTS PMS, PMC and RTA (Solaris environment) .................................................................... 19
2.3.1.1 Hardware ................................................................................................................................... 19
2.3.1.2 Single Server Environment ........................................................................................................ 20
2.3.1.3 Distributed Environment Large ................................................................................................. 24
2.3.1.4 Hardware Configurations & Database Installation Types.......................................................... 28
2.3.1.5 Customized Configurations ....................................................................................................... 29
2.3.1.6 Additional Items ........................................................................................................................ 29
2.3.1.7 Standard Software ...................................................................................................................... 29
2.3.1.8 SPOTS Software ........................................................................................................................ 29
2.3.2 SPOTS PMC Windows environment ............................................................................................... 30
2.4 Hard Disk Partitioning ...................................................................................................................................... 31
2.4.1 Standard Configurations ........................................................................................................................ 32
2.4.1.1 Single Server Configurations ..................................................................................................... 32
2.4.1.1.1 Small A (2x146GB) Configuration .......................................................................................... 32
2.4.1.1.2 Small B (8x73GB / 146GB) Configuration .............................................................................. 33
2.4.1.1.3 Small C (2x146GB) Configuration........................................................................................... 35
2.4.1.1.4 Small D (4x146GB) Configuration .......................................................................................... 36
2.4.1.1.5 Medium A (8x73GB / 146GB) Configuration .......................................................................... 37
2.4.1.1.6 Medium B (2x146GB) Configuration ...................................................................................... 40
2.4.1.1.7 Medium C (4x146GB) Configuration ...................................................................................... 42
2.4.1.1.8 Medium D (4x146GB) Configuration ...................................................................................... 44
2.4.1.2 Distributed Configurations ........................................................................................................ 46
2.4.1.2.1 Large Configuration – DB Server A......................................................................................... 46
2.4.1.2.2 Large Configuration – DB Server B ......................................................................................... 48
2.4.1.2.3 Large Configuration – DB Server C ......................................................................................... 49
2.4.1.2.4 Large Configuration – DB Server D......................................................................................... 51
2.4.1.2.5 Large Configuration – Application Server A ........................................................................... 53
2.4.1.2.6 Large Configuration – Application Server B............................................................................ 53
2.4.2 Legacy Configurations ........................................................................................................................... 54
2.4.2.1 Single Server Configurations ..................................................................................................... 54
2.4.2.1.1 Legacy Small A1 or Small B1 (3x73GB) Configuration.......................................................... 54
2.4.2.1.2 Legacy Small B2 (4x73GB) Configuration .............................................................................. 55
2.4.2.1.3 Legacy Medium B1 (2+2x73GB) Configuration...................................................................... 56
2.4.2.2 Legacy Distributed Configurations ............................................................................................ 59
2.4.2.2.1 Legacy Large B1 Configuration – DB Server .......................................................................... 59
2.4.2.2.2 Large Configuration – Application Server ............................................................................... 60

3 INSTALLATION PROCEDURE OVERVIEW .................................................................................................. 62


3.1 Preparing for the SPOTS Installation............................................................................................................ 62
3.1.1 Obtaining SPOTS Licenses .................................................................................................................. 62
3.1.2 Need for Upgrade or Data Migration ................................................................................................... 62
3.1.2.1 Upgrade overview ...................................................................................................................... 62
3.1.2.1.1 Upgrade from V12 to V14 ........................................................................................................ 62
3.1.2.1.2 Upgrade from V13 to V14 ........................................................................................................ 63
3.1.2.2 Upgrade using existing HW ....................................................................................................... 63
3.1.2.3 Upgrade using new HW ............................................................................................................. 63
3.1.2.4 Data Migration ........................................................................................................................... 63
3.1.3 Consulting Product Release Notes ..................................................................................................... 64


3.1.4 Collecting Information ............................................................................................................................ 64


3.2 Installation Tasks .............................................................................................................................................. 64
3.2.1 Initial Installation of a SPOTS V14 System ........................................................................................ 65
3.2.2 Upgrade on Existing Hardware from SPOTS V12 or V13 System ................................................ 69
3.2.3 Migrating to New Hardware from SPOTS V12 or V13 System ...................................................... 86
3.2.4 SPOTS V14 Software Upgrade............................................................................................................ 90
3.2.5 SPOTS V14 Hardware Upgrade .......................................................................................................... 92
3.2.5.1 Backup User Parameters ( Both AS & DS ) .............................................................................. 93
3.2.5.2 Provide External Repository for V14 Export ( Both AS & DS ) ............................................... 93
3.2.5.3 Export V14 Data ( Only DS ) .................................................................................................... 93
3.2.5.4 Reinstall Spots V14 System ....................................................................................................... 99
3.2.5.5 Import V14 system Old Data ( Only DS ) ............................................................................... 100
3.2.5.6 Restore User Parameters ( Both AS & DS ) ............................................................................ 103
3.2.5.7 Upgrade SPOTS TPs ( Only DS ) ............................................................................................ 104
3.2.5.8 Install New SPOTS TPs ( Both AS & DS ) ............................................................................. 104
3.2.5.9 Install Virtual X Server ( Both AS & DS ) .............................................................................. 104
3.2.5.10 Reboot the System ( Both AS & DS )...................................................................................... 105
3.2.6 Installation of Oracle Instant Client in Application Server (AS) machine .................................... 106
3.2.7 Procedures for SPOTS Systems with BAR or Autochanger Tape Device ................................. 107
3.2.7.1 System with SPOTS-BAR and Autochanger Tape Device ...................................................... 107
3.2.7.2 System only with an Autochanger Tape Device ...................................................................... 108
3.2.8 Backup user parameters from an existing SPOTS system ........................................................... 109
3.2.8.1 Backup user-defined configuration parameters and files ......................................................... 109
3.2.8.2 Gather information of existing SPOTS System Users ............................................................. 109
3.2.8.3 Long-Term Files ...................................................................................................................... 109
3.2.8.4 Real-Time Configuration Parameters ...................................................................................... 110
3.2.8.5 Real-Time Configuration Files ................................................................................................ 110
3.2.9 Restore user parameters on the newly installed SPOTS system ................................................ 112
3.2.9.1 Create Old SPOTS System Users in the New SPOTS System ................................................ 112
3.2.9.2 Restoring user-defined configuration parameters and files ...................................................... 112
3.2.9.2.1 Merge of generic long term files ............................................................................................ 112
3.2.9.2.2 Merge of virtual entities ......................................................................................................... 112
3.2.9.2.3 Restore of user tasks ............................................................................................................... 114
3.2.9.2.4 Merge of generic real time files.............................................................................................. 114

4 STARTING AND STOPPING SPOTS ....................................................................................................... 115


4.1 Stopping SPOTS ............................................................................................................................................ 115
4.1.1 Stopping SPOTS LT services only .................................................................................................... 116
4.1.2 Stopping SPOTS RT services only ................................................................................................... 116
4.1.3 Stopping SPOTS add-ons services only .......................................................................................... 117
4.2 Starting SPOTS .............................................................................................................................................. 119
4.2.1 Starting SPOTS Long-Term services only ....................................................................................... 120
4.2.2 Starting SPOTS Real-Time services only ........................................................................................ 120
4.2.3 Starting SPOTS add-ons services only ............................................................................................ 120
5 INSTALLING SUN SOLARIS 10 .............................................................................................................. 122
5.1 Installing System Patches ............................................................................................................................. 127
6 FAULT TOLERANCE WITH DISK MIRRORING.......................................................................................... 128
6.1 Configuring Disk Mirroring ........................................................................................................................ 129
6.1.1.1 Configuring System Disk Mirroring ........................................................................................ 129
6.2 Maintenance Procedures .............................................................................................................................. 132
6.2.1 Monitoring Tasks .................................................................................................................................. 132
6.2.1.1 Solaris Volume Manager Objects ............................................................................................ 132
6.2.1.2 Disk Failure Notification via Email ......................................................................................... 132
6.2.1.3 Verifying the Status of State Database Replicas ...................................................................... 133
6.2.1.4 Verifying Status of Metadevices.............................................................................................. 134
6.2.2 Replacing mirroring disks .................................................................................................................... 135
6.2.3 Booting system with insufficient database replicas ........................................................................ 136
6.2.4 Creating and Deleting Solaris Volume Manager Objects .............................................................. 137
6.2.4.1 Creating a Solaris Volume Manager State Database Replica ................................................. 137
6.2.4.2 Removing a Solaris Volume Manager State Database Replica................................................ 138
6.2.4.3 Creating a Solaris Volume Manager submirror ....................................................................... 138
6.2.4.4 Removing a Solaris Volume Manager submirror .................................................................... 138
6.2.4.5 Removing a Solaris Volume Manager Mirror and Submirrors ................................................ 139
6.2.4.6 Unmirroring a File System That Cannot Be Unmounted ......................................................... 141


6.2.5 Formatting Disks ................................................................................................................................... 143


6.2.6 Creating File Systems .......................................................................................................................... 144
6.2.7 Detect and terminate processes that are using a filesystem ........................................................ 146
7 SPOTS CONFIGURATIONS WITH EXTERNAL STORAGE ....................................................................... 148
7.1 Physical Connections (step 1)...................................................................................................................... 149
7.1.1 Medium A/B/C/D and Medium B1 Legacy Configuration – Single Server.................................. 149
7.1.2 Large and Large B1 Legacy Configuration – DB Server ............................................................... 152
7.2 Installing External Array Software (step 2) ................................................................................................ 154
7.2.1 Sun StorEdge 3320 SCSI Array Software........................................................................................ 154
7.2.2 Sun StorageTek ST2540 Common Array Software (CAM)........................................................... 155
7.3 External Storage Configuration and Hard Disk Partitioning (step 3) ..................................................... 169

8 INSTALLING ORACLE SOFTWARE ....................................................................................................... 170


8.1 Removing Oracle Software ........................................................................................................................... 171

9 INSTALLING SPOTS SOFTWARE (SOLARIS ENVIRONMENT) ............................................................. 172


9.1 Structure of Installation Procedure .............................................................................................................. 172
9.2 NIS / NIS+ or LDAP Users and Groups Requirements ........................................................................... 173
9.3 SPOTS software - Choice of packages for V14.0 Core .......................................................................... 174
9.4 Installing SPOTS Software V14.0 ............................................................................................................... 179
9.4.1 Installing SPOTS-PMC in Solaris environment ............................................................................ 182
9.5 System configuration issues ......................................................................................................................... 185
9.6 SPOTS Licensing Software .......................................................................................................................... 186
9.6.1 Installing a license ................................................................................................................................ 186
9.6.2 Dumping the installed licenses ........................................................................................................... 186
9.6.3 Removing the installed licenses ......................................................................................................... 186
9.7 Real-Time Configuration issues................................................................................................................... 188
9.7.1 Configuring a Distributed SPOTS Environment with Real-Time .................................................. 188
9.7.2 Configuring SPOTS RT Agency Software (Solaris environment) ............................................. 188
9.7.2.1 Configuring real_time.cfg files ................................................................................................ 188
9.7.3 Modifying the RT Agencies default memory (if desired) ................................................................ 190
9.7.4 Modifying the MonitorServer default memory (if desired).............................................................. 192
9.7.5 Configuring the events gateway file .................................................................................................. 192
9.7.6 Stop the SNMP Agent in Solaris ........................................................................................................ 193
9.7.7 Connecting SAA to an external Fault Management application ................................................... 193
9.7.8 Multiple Ethernet Cards on the Same Machine............................................................................... 195

10 INSTALLATION OF SPOTS V14 SOFTWARE (WINDOWS ENVIRONMENT) ........................................ 196


10.1 Installing SPOTS-PMC................................................................................................................................ 196
10.1.1 Installation sequence ......................................................................................................................... 196
10.1.2 Troubleshooting .................................................................................................................................. 203
10.1.3 SPOTS License checking ................................................................................................................. 204
10.2 Installing SPOTS DOC Software ............................................................................................................... 205
11 TECHNOLOGY PLUG-INS (TPS) .......................................................................................................... 210
11.1 Documentation.............................................................................................................................................. 210
11.2 Installation / Upgrade / Uninstallation ....................................................................................................... 210
11.3 NMS Configuration....................................................................................................................................... 211

12 MODIFYING A SPOTS V14 INSTALLATION (WINDOWS ENVIRONMENT) ......................................... 212


12.1 SPOTS PMC ................................................................................................................................................. 212
12.2 SPOTS DOC ................................................................................................................................................. 216

13 UPDATING SPOTS SOFTWARE (WINDOWS ENVIRONMENT) .......................................................... 221


13.1 Updating SPOTS PMC................................................................................................................................ 221
13.2 Updating SPOTS DOC................................................................................................................................ 224

14 UNINSTALLING SPOTS SOFTWARE (SOLARIS ENVIRONMENT) ...................................................... 227


14.1 Removing SPOTS Packages (V14.0 Core-Drop 2) ............................................................................... 228
14.1.1 Removing SPOTS Packages with spots_installer ........................................................................ 228
14.2 Removing SPOTS Add-ons Packages (V14.0 Core-Drop 2) ............................................................... 230
14.2.1 Removing SPOTS Add-ons Packages with spots_installer ........................................................ 230

15 UNINSTALLING SPOTS SOFTWARE (WINDOWS ENVIRONMENT) ................................................... 232


15.1 Uninstalling SPOTS PMC ........................................................................................................................... 232


15.1.1 Files not removed during de-installation ........................................................................................ 236


15.2 Uninstalling SPOTS DOC ........................................................................................................................... 237
16 ABBREVIATIONS .................................................................................................................................. 242
17 REFERENCES ...................................................................................................................................... 243
ANNEX 1 – UNIX ENVIRONMENT VARIABLES ........................................................................................... 245
ANNEX 2 – DOMAINS’ CONFIGURATION ................................................................................................... 249
ANNEX 3 – SERVER CONFIGURATION FILES ........................................................................................... 253
ANNEX 4 – SPOTS RT CONFIGURATION................................................................................................ 259
ANNEX 5 – CONFIGURATION WORKSHEET .............................................................................................. 268
ANNEX 6 – SYSTEM BACKUP & RESTORE ............................................................................................... 277
ANNEX 7 – EXTERNAL STORAGE SETUP FOR MEDIUM CONFIGURATION ............................................... 291
Spots StorEdge Medium A Configuration ......................................................................................................... 292
Spots StorEdge Medium B Configuration ......................................................................................................... 307
Spots StorageTek Medium C Configuration..................................................................................................... 325
Spots StorageTek Medium D Configuration..................................................................................................... 343
ANNEX 8 – EXTERNAL STORAGE SETUP FOR LARGE CONFIGURATION ................................................. 345
Spots StorEdge Large A Configuration ............................................................................................................. 346
Spots StorEdge Large B Configuration ............................................................................................................. 359
Spots StorageTek Large C Configuration ......................................................................................................... 372
Spots StorageTek Large D Configuration ......................................................................................................... 374
ANNEX 9 – STOREDGE 3320 SETUP FOR MEDIUM LEGACY CONFIGURATION ....................................... 375
ANNEX 10 – STOREDGE 3320 SETUP FOR LARGE LEGACY CONFIGURATION ....................................... 391
ANNEX 11 – SETTING UP LDAP CLIENT IN SOLARIS ............................................................................... 408
ANNEX 12 – CONFIGURING THE RS-232 SERIAL PORT CONNECTION ................................................... 411
ANNEX 13 – SUN SPARC ENTERPRISE M3000 SERVER SPOTS INSTALLATION .................................. 414
Post installation tasks Spots PMS Distributed Configuration ........................................................................ 415
ANNEX 14 – SUN SPARC ENTERPRISE M3000/ M4000 XSCF ................................................................. 416


List of Figures

Figure 1, Small and Medium Configurations, Single Server Environment ..................................... 22


Figure 2, Large Configuration, Distributed Environment.................................................................... 26
Figure 3, Initial Installation of a SPOTS V14 System............................................................................ 65
Figure 4, Upgrade on Existing Hardware from SPOTS V12 or V13 System ................................... 69
Figure 5, Upgrading from SPOTS V12/V13 System (Using New HW) .............................................. 86
Figure 6, SPOTS V14 Software Upgrade ................................................................................................. 91
Figure 7, SPOTS V14 Hardware Upgrade ................................................................................................ 92
Figure 8, Cable Configuration for StorEdge 3320 ............................................................................... 149
Figure 9, StorageTek ST2540 Medium C, configuration .................................................................... 150
Figure 10, Cable Configuration for StorEdge 3320 (Master) with JBOD ....................................... 153
Figure 11, StorageTek ST2540 with JBOD, Large C configuration ................................................ 154
Figure 12, CAM welcome screen ............................................................................................................. 156
Figure 13, CAM License Agreement ....................................................................................................... 157
Figure 14, CAM installation type ............................................................................................................. 157
Figure 15, CAM installation review ......................................................................................................... 158
Figure 16, CAM installation finished with success............................................................................. 158
Figure 17, CAM authentication web page ............................................................................................. 159
Figure 18, CAM first login .......................................................................................................................... 160
Figure 19, CAM site information form .................................................................................................... 161
Figure 20, CAM site information form saved successfully ............................................................... 162
Figure 21, CAM Storage System Summary........................................................................................... 162
Figure 22, CAM Registering the Storage System ................................................................................ 163
Figure 23, CAM Auto Discovery of Storage Systems ........................................................................ 164
Figure 24, CAM List of the Storage Systems Discovery ................................................................... 165
Figure 25, CAM status of the Storage Systems registration ............................................................ 166
Figure 26, CAM Storage Systems summary ......................................................................................... 167
Figure 27, CAM Storage Systems firmware upgrade ......................................................................... 168
Figure 28, Interface for StorEdge 3320 Configuration ....................................................................... 293
Figure 29, Main Menu window .................................................................................................................. 293
Figure 30, Main Menu Channel selection .............................................................................................. 294
Figure 31, Main Menu Unmap LUN.......................................................................................................... 294
Figure 32, Main Menu window .................................................................................................................. 295
Figure 33, Logical Drives table ................................................................................................................ 296
Figure 34, Actions for Logical Drives ..................................................................................................... 296
Figure 35, Create Logical Drive confirmation ...................................................................................... 298
Figure 36, Raid level selection ................................................................................................................. 298
Figure 37, Disk Selection ........................................................................................................................... 299
Figure 38, Logical Drive Creation confirmation................................................................................... 299
Figure 39, Second logical drive creation ............................................................................................... 300
Figure 40, Second logical drive disk selection .................................................................................... 300
Figure 41, Redundant controller assignment ...................................................................................... 301
Figure 42, Partition second logical drive .............................................................................................. 301
Figure 43, First partition ............................................................................................................................ 302
Figure 44, First partition ............................................................................................................................ 302
Figure 45, Logical Drive Selection .......................................................................................................... 303
Figure 46, Map Host Lun confirmation ................................................................................................. 303
Figure 47, Second host lun configuration ............................................................................................ 304
Figure 48, Host lun configuration ........................................................................................................... 304
Figure 49, Main Menu ................................................................................................................................. 305
Figure 50, Interface for StorEdge 3320 Configuration ....................................................................... 308
Figure 51, Main Menu window .................................................................................................................. 308
Figure 52, Main Menu Channel selection .............................................................................................. 309
Figure 53, Main Menu Unmap LUN.......................................................................................................... 309
Figure 54, Main Menu window .................................................................................................................. 310
Figure 55, Logical Drives table ................................................................................................................ 311


Figure 56, Actions for Logical Drives ..................................................................................................... 311


Figure 57, Create Logical Drive confirmation ...................................................................................... 312
Figure 58, Raid level selection ................................................................................................................. 313
Figure 59, Disk Selection ........................................................................................................................... 313
Figure 60, Stripe Size selection ............................................................................................................... 314
Figure 61, Logical Drive Creation confirmation................................................................................... 314
Figure 62, Second logical drive creation ............................................................................................... 315
Figure 63, Second logical drive disk selection .................................................................................... 315
Figure 64, Redundant controller assignment ...................................................................................... 316
Figure 65, Alter stripe size to 128KB for the second logical drive ................................................. 316
Figure 66, Second logical drive creation ............................................................................................... 317
Figure 67, Main Menu Host Luns ............................................................................................................. 317
Figure 68, Map Host Lun Controller selection ..................................................................................... 318
Figure 69, Map Host Lun Controller selection ..................................................................................... 318
Figure 70, Select drive................................................................................................................................ 319
Figure 71, Map Host Lun message dialog ............................................................................................. 319
Figure 72, Map Host Lun message dialog ............................................................................................. 320
Figure 73, Controller Selection ................................................................................................................ 320
Figure 74, Lun Selection ............................................................................................................................ 321
Figure 75, Logical drive selection dialog .............................................................................................. 321
Figure 76, Map Host Lun message dialog ............................................................................................. 322
Figure 77, Map Host Lun message dialog ............................................................................................. 322
Figure 78, Main Menu ................................................................................................................................. 323
Figure 79, CAM authentication web page ............................................................................................. 325
Figure 80, CAM Virtual Disks .................................................................................................................... 326
Figure 81, CAM create Virtual Disks configuration ............................................................................ 327
Figure 82, CAM create Virtual Disks configuration ............................................................................ 328
Figure 83, CAM create Virtual Disks, specify mirror pairs................................................................ 328
Figure 84, CAM create Virtual Disks, specify mirror pairs................................................................ 329
Figure 85, CAM create Virtual Disks, specify mirror pairs................................................................ 329
Figure 86, CAM create Virtual Disks, specify mirror pairs................................................................ 330
Figure 87, Create CAM Virtual Disks, configure volume ................................................................... 331
Figure 88, CAM Create Virtual Disks, specify volume mapping ...................................................... 331
Figure 89, CAM Create Virtual Disks, select Host or Host Group ................................................... 332
Figure 90, CAM Create Virtual Disks, review configuration ............................................................. 332
Figure 91, CAM Create Virtual Disks summary on Storage Systems ............................................ 333
Figure 92, CAM Create Virtual Disks configuration ............................................................................ 334
Figure 93, CAM Create Virtual Disks configuration ............................................................................ 334
Figure 94, CAM create Virtual Disks, specify mirror pairs................................................................ 335
Figure 95, CAM Create Virtual Disks, specify mirror pairs ............................................................... 335
Figure 96, CAM Create Virtual Disks, specify mirror pairs ............................................................... 336
Figure 97, CAM Create Virtual Disks, specify mirror pairs ............................................................... 336
Figure 98, Create CAM Virtual Disks, configure volume ................................................................... 337
Figure 99, CAM create Virtual Disks, specify volume mapping ...................................................... 337
Figure 100, CAM create Virtual Disks, select Host or Host Group ................................................. 338
Figure 101, CAM create Virtual Disks, review configuration............................................................ 338
Figure 102, CAM create Virtual Disks summary on Storage Systems........................................... 339
Figure 103, CAM create Virtual Disks summary on Storage Systems........................................... 340
Figure 104, CAM Volume Summary on Storage Systems ................................................................. 341
Figure 105, CAM Mapping Summary on Storage Systems ............................................................... 341
Figure 106, CAM Current Job Summary on Storage Systems ........................................................ 342
Figure 107, Interface for StorEdge 3320 Configuration ..................................................................... 347
Figure 108, Main Menu window................................................................................................................ 347
Figure 109, Main Menu Channel selection ............................................................................................ 348
Figure 110, Main Menu Unmap LUN ....................................................................................................... 348
Figure 111, Main Menu window................................................................................................................ 349
Figure 112, Logical Drives table .............................................................................................................. 349
Figure 113, Actions for Logical Drives .................................................................................................. 350
Figure 114, Create Logical Drive confirmation .................................................................................... 351


Figure 115, Raid level selection ............................................................................................................... 351


Figure 116, Disk Selection......................................................................................................................... 352
Figure 117, Logical Drive Creation confirmation ................................................................................ 352
Figure 118, Second logical drive creation ............................................................................................ 353
Figure 119, Second logical drive disk selection .................................................................................. 353
Figure 120, Secondary controller assignment ..................................................................................... 354
Figure 121, Logical drive creation........................................................................................................... 354
Figure 122, Channel 1 Selection .............................................................................................................. 355
Figure 123, Selecting the first empty slot. ............................................................................................ 355
Figure 124, Logical Drive selection ........................................................................................................ 356
Figure 125, Map Host Lun confirmation ................................................................................................ 356
Figure 126, Second Host Lun confirmation .......................................................................................... 357
Figure 127, Main Menu ............................................................................................................................... 357
Figure 128, Interface for StorEdge 3320 Configuration ..................................................................... 360
Figure 129, Main Menu window................................................................................................................ 360
Figure 130, Main Menu Channel selection ............................................................................................ 361
Figure 131, Main Menu Unmap LUN ....................................................................................................... 361
Figure 132, Main Menu window................................................................................................................ 362
Figure 133, Logical Drives table .............................................................................................................. 363
Figure 134, Actions for Logical Drives .................................................................................................. 363
Figure 135, Create Logical Drive confirmation .................................................................................... 364
Figure 136, Raid level selection ............................................................................................................... 364
Figure 137, Disk Selection......................................................................................................................... 365
Figure 138, Logical Drive Creation confirmation ................................................................................ 365
Figure 139, Second logical drive creation ............................................................................................ 366
Figure 140, Second logical drive disk selection .................................................................................. 366
Figure 141, Secondary controller assignment ..................................................................................... 367
Figure 142, Logical drive creation........................................................................................................... 367
Figure 143, Channel 1 Selection .............................................................................................................. 368
Figure 144, Selecting the first empty slot. ............................................................................................ 368
Figure 145, Logical Drive selection ........................................................................................................ 369
Figure 146, Map Host Lun confirmation ................................................................................................ 369
Figure 147, Second Host Lun confirmation .......................................................................................... 370
Figure 148, Main Menu ............................................................................................................................... 370
Figure 149, Interface for StorEdge 3320 Configuration ..................................................................... 377
Figure 150, Main Menu window................................................................................................................ 377
Figure 151, Main Menu Channel selection ............................................................................................ 378
Figure 152, Main Menu Unmap LUN ...................................................................................................... 378
Figure 153, Main Menu window................................................................................................................ 379
Figure 154, Logical Drives table .............................................................................................................. 379
Figure 155, Actions for Logical Drives .................................................................................................. 380
Figure 156, Create Logical Drive confirmation .................................................................................... 381
Figure 157, Raid level selection ............................................................................................................... 381
Figure 158, Disk Selection......................................................................................................................... 382
Figure 159, Logical Drive Creation confirmation ................................................................................ 382
Figure 160, Second logical drive creation ............................................................................................ 383
Figure 161, Second logical drive disk selection .................................................................................. 383
Figure 162, Redundant controller assignment .................................................................................... 384
Figure 163, Channel 1 Selection .............................................................................................................. 385
Figure 164, Logical Drive Selection ........................................................................................................ 385
Figure 165, Selecting the first empty slot ............................................................................................. 386
Figure 166, Logical Drive selection ........................................................................................................ 386
Figure 167, Map Host Lun confirmation ................................................................................................ 387
Figure 168, Host Lun creation confirmation ......................................................................................... 387
Figure 169, Second host lun configuration .......................................................................................... 388
Figure 170, Third host lun configuration ............................................................................................... 388
Figure 171, Main Menu ............................................................................................................................... 389
Figure 172, Interface for StorEdge 3320 Configuration ..................................................................... 393
Figure 173, Main Menu window................................................................................................................ 393


Figure 174, Main Menu Channel selection ............................................................................................ 394


Figure 175, Main Menu Unmap LUN ....................................................................................................... 394
Figure 176, Main Menu window................................................................................................................ 395
Figure 177, Logical Drives table .............................................................................................................. 395
Figure 178, Actions for Logical Drives .................................................................................................. 396
Figure 179, Create Logical Drive confirmation .................................................................................... 396
Figure 180, Raid level selection ............................................................................................................... 397
Figure 181, Disk Selection......................................................................................................................... 397
Figure 182, Logical Drive Creation confirmation ................................................................................ 398
Figure 183, Second logical drive creation ............................................................................................ 399
Figure 184, Second logical drive disk selection .................................................................................. 399
Figure 185, Secondary controller assignment ..................................................................................... 400
Figure 186, Logical drive creation........................................................................................................... 400
Figure 187, Final state of the four Logical Drive creation processes ............................................ 401
Figure 188, Channel 1 Selection .............................................................................................................. 402
Figure 189, Logical Drive Selection ........................................................................................................ 402
Figure 190, Selecting the first empty slot. ............................................................................................ 403
Figure 191, Logical Drive selection ........................................................................................................ 403
Figure 192, Map Host Lun confirmation ................................................................................................ 404
Figure 193, Host Lun creation confirmation ......................................................................................... 404
Figure 194, Second host lun configuration .......................................................................................... 405
Figure 195, Third host lun configuration ................................................................................................ 405
Figure 196, Main Menu ............................................................................................................................... 406
Figure 197, HyperTerminal configuration ............................................................................................. 412
Figure 198, Connection properties ......................................................................................................... 413
Figure 199, Connection to the XSCF through a switch ..................................................................... 417
Figure 200, Switch from XSCF to OK prompt and boot from CDROM........................................... 418


List of Tables

Table 1 - Installation Distribution Media for SPOTS V14........................................................................... 12


Table 2 – Textual and graphic conventions ................................................................................................. 15
Table 3 - Standard HW Configurations (Single Server) ............................................................................. 21
Table 4 – Legacy HW Configurations (Single Server Environment) ........................................................ 23
Table 5 – Standard HW Configurations (Distributed Environment) ........................................................ 25
Table 6 – Legacy HW Configurations (Distributed Environment) .............................................. 27
Table 7 - Hardware Configurations & Database Installation Types ......................................................... 28
Table 8 – Hardware requirements for SPOTS PMC .................................................................................. 30
Table 9 - Disk Partitioning, Small A Configuration ...................................................................................... 32
Table 10 - Disk Partitioning, Small B Configuration ................................................................................... 34
Table 11 - Disk Partitioning, Small Configuration C ................................................................................... 35
Table 12 - Disk Partitioning, Small Configuration D ................................................................................... 37
Table 13 - Disk Partitioning, Medium Configuration A, Internal Disks ..................................................... 39
Table 14 - Disk Partitioning, Medium Configuration A, External Disks.................................................... 40
Table 15 - Disk Partitioning, Medium Configuration B, Internal Disks ..................................................... 41
Table 16 - Disk Partitioning, Medium Configuration B, External Disks.................................................... 41
Table 17 - Disk Partitioning, Medium Configuration C, Internal Disks ..................................................... 43
Table 18 - Disk Partitioning, Medium Configuration C, External Disks ................................................... 43
Table 19 - Disk Partitioning, Medium Configuration D, Internal Disks ..................................................... 45
Table 20 - Disk Partitioning, Medium Configuration D, External Disks ................................................... 45
Table 21 - Disk Partitioning, Medium Configuration D, External Disks ................................................... 45
Table 22 - Disk Partitioning, Large Configuration A, Internal Disks – DB Server .................................. 48
Table 23 - Disk Partitioning, Large Configuration A, External Disks – DB Server ................................. 48
Table 24 - Disk Partitioning, Large Configuration B, Internal Disks – DB Server .................................. 49
Table 25 - Disk Partitioning, Large Configuration B, External Disks – DB Server ................................. 49
Table 26 - Disk Partitioning, Large Configuration C, Internal Disks – DB Server .................................. 50
Table 27 - Disk Partitioning, Large Configuration C, External Disks – DB Server ................................ 51
Table 28 - Disk Partitioning, Large Configuration D, Internal Disks – DB Server .................................. 52
Table 29 - Disk Partitioning, Large Configuration D, External Disks – DB Server ................................ 52
Table 30 - Disk Partitioning, Large Configuration D, External Disks ....................................................... 52
Table 31 - Disk Partitioning, Large Configuration - Application Server ................................................... 53
Table 32 - Disk Partitioning, Large Configuration - Application Server ................................................... 54
Table 33 - Disk Partitioning, Small Configuration – Type A1 and B1 ...................................................... 55
Table 34 - Disk Partitioning, Small Configuration – Type B2 .................................................................... 56
Table 35 - Disk Partitioning, Legacy Medium Configuration B1 ............................................................... 58
Table 36 - Disk Partitioning, Legacy Medium Configuration B1 – External Disks ................................. 58
Table 37 - Disk Partitioning, Legacy Large Configuration B1, Internal Disks – DB Server .................. 60
Table 38 - Disk Partitioning, Large Configuration B1, External Disks – DB Server............................... 60
Table 39 - Disk Partitioning, Legacy Large Configuration B1, Internal Disks – AS Server .................. 61


1 Introduction

This document describes the installation procedures for SPOTS V14, for both the Long-Term part
and the (optional) Real-Time part.

1.1 Scope
The SPOTS product belongs to the family of Network Management Systems provided by Nokia
Siemens Networks (ICM N), for the Operation and Maintenance of Mobile Networks (Core, GERAN
and UTRAN sub-networks).
The SPOTS product integrates the platform (consisting of hardware and standard software) and
application software that supports the Performance Management activities on the network.
The SPOTS V14 application software includes a mandatory Long-Term part and an optional,
additional Real-Time part.
The SPOTS V14 Long-Term part provides performance management analysis capabilities, allowing the production of pre-defined and user-defined reports with Performance Management indicators based on data periodically collected from the network.
The SPOTS V14 Real-Time part provides near real-time updates of the network Performance Management information, allowing threshold values to be defined for Performance Management indicators; alarms are raised when these thresholds are violated. The optional module "SNMP Alarm Agent" allows these alarms to be forwarded via SNMP to an external application.


1.2 Installation Distribution Media for SPOTS V14 (DVDs)


The following table lists all the media that are referred to in this Installation Guide and that are
necessary to install SPOTS V14.

Media name                                                  Ordering Number                         Quantity

Solaris 10
  Solaris 10 10/08 Software DVD for M3000/M4000             Controlled by Nokia Siemens Networks    1
  Solaris 10 08/07 Software DVD for legacy hardware only    Controlled by Nokia Siemens Networks    1

SPOTS V14
  SPOTS Performance Management V14.0 Core DVD                                                       1
  Oracle Installation Packages DVD                                                                  1
  OEM Patches DVD                                                                                   1
  Technology Plug-Ins for Solaris DVD                                                               1
  SPOTS V14.0 Appl Patches DVD                                                                      1
Table 1 - Installation Distribution Media for SPOTS V14

1.3 SPOTS documentation
The SPOTS application documentation consists of the following parts:

Manuals
Document the application and provide the necessary information for its operation.
They are available (together with the SPOTS SW) on the SPOTS distribution media, in the SPOTS
Performance Management V14.0 Core DVD (refer to Table 1 - Installation Distribution Media for
SPOTS V14).

Release Notes
Summarize the authorized versions of the hardware and software products and provide up-to-date
information about user notes, functional limitations and error corrections. The Release Notes apply
to a specific release of this SPOTS version (including both the Long-Term and the Real-Time part).
SPOTS V14.0.x – Release Notes — refer to [2]

On-line help
Provides a complete “on-context” description of the system functionality.


1.4 Target group and structure of this manual


The Installation Guide is intended for System Administrators and for those who install the SPOTS
product. On the server side, it describes the necessary procedures to perform the initial installation,
APS upgrades, configuration and de-installation of the SPOTS system 1). Specific chapters are
dedicated to the installation and de-installation of the Windows variant (2003 / XP) of the SPOTS
Client.
Chapter 2 describes the functional components of the SPOTS system. It also lists the information
that must be gathered before starting the initial installation.
Chapter 3 contains an overview of the required procedures for the initial installation and system
upgrade.

Chapter 4 describes the instructions on how to start and stop SPOTS services.

Chapter 5 describes how to install the Solaris 10 Operating System and the associated patches.

Chapter 6 describes how to install, maintain and uninstall Fault Tolerance with Disk Mirroring.

Chapter 7 describes how to install and configure the StorEdge 3320 array.

Chapter 8 describes in detail the necessary actions that must be performed to install Oracle
software according to SPOTS requirements.

Chapter 9 deals with the installation of SPOTS PMS (including RTA) and PMC components on
Solaris environments.

Chapter 10 describes how to install SPOTS Software in a Windows environment.

Chapter 11 describes how to install the Technology Plug-Ins associated with the existing software
versions of each managed Network Element.

Chapter 12 describes how to modify a SPOTS Software installation (Windows environment).

Chapter 13 describes how to update SPOTS Software (Windows environment).

Chapter 14 describes how to uninstall all installed PMS components on Solaris.

Chapter 15 describes how to uninstall SPOTS Software (Windows environment).

Chapter 16 lists the abbreviations used in this document.

Chapter 17 lists the references to other documents that are referred to in this Installation Guide.

 Perform the installation procedures as described in this manual. Deviations can result in malfunctioning.

1) As a prerequisite, please read the corresponding Release Notes.

1.4.1 Document Conventions
The following textual and graphic conventions are used in this document:

Convention       Meaning

#                The UNIX super-user default prompt.
Boldface         Emphasizes an important word or concept.
<expression>     An expression enclosed within angle brackets indicates some value to be input
                 that cannot be determined beforehand, and so it should not be taken literally.
                Important notice or warning.
                Action that requires user interaction.
                Note or relevant event triggered by some action.
                Indicates that a certain procedure is completed.
Table 2 – Textual and graphic conventions


2 General Overview

The SPOTS system comprises several functional components, grouped into installable SW
packages according to a Client/Server approach:

• SPOTS Performance Management Client (PMC)

Client GUI package:


• SPOTS Client GUI (spotsCL)

This client package is available for Solaris and the Windows 2003 and XP Operating
Systems.

• SPOTS Performance Management Server (PMS)

Long-Term packages:
• SPOTS Application Server (spotsAS)
• SPOTS Database (spotsDB)
• SPOTS Database Server (spotsDS)
• SPOTS Naming Server (spotsNS)

Real-Time packages: (optional)


• SPOTS Real Time Database (spotsRTDB)
• SPOTS Real Time Server (spotsRTS)
• SPOTS Real Time Agency (spotsRTA)
• SPOTS SNMP Alarm Agent (spotsSAA)

2.1 Deployment of SPOTS components

All SPOTS PMS components are available on Solaris 10.


SPOTS PMS components may be installed in two different configuration types: Single Server and
Distributed, according to the chosen HW configuration (see Section 2.3).
In the Small and Medium configurations, the Single Server installation applies (all SPOTS
components installed on a single host); see Section 9.3 for more details on which components to
install.
In the Large configuration, the Distributed installation applies – in this case, the DB-related
components are installed on a host (DB Server) and the Application-related components are
installed on another host (Application Server), see Section 9.3 for more details on which
components to install.
 Other forms of distribution can be implemented on a project-specific basis. Your
local Nokia Siemens Networks representative will help you determine which
configuration best fits your needs.
In addition to the distribution possibilities described above, the spotsRTA package may be
installed on Solaris 10 on the same machine as the spotsAS and/or on a separate machine to
allow load distribution. It is strongly recommended to install spotsRTA on a Solaris 10 machine.
Complete information about how to install and de-install SPOTS PMS is presented along with
SPOTS PMS installation and de-installation in Chapters 9 and 14. Details on the deployment of
PMS packages according to the chosen HW configuration are given in Section 9.3.

SPOTS PMC will be installed on each user’s workstation implementing the SPOTS Graphical User
Interface (GUI).

The SPOTS PMC component (SCL) is available in the following operating systems:
• Solaris 10
• Windows 2003
• Windows XP
It is possible to install the SPOTS PMC on the same system as the SPOTS PMS. However, due to
performance reasons, intensive usage of PMC in this configuration (e.g. for frequent execution of
performance reports) is not recommended. In a normal operation environment, this SPOTS PMC
configuration is suitable only for sporadic actions (e.g. administrative actions).
Complete information about how to install and de-install SPOTS PMC on a Windows system is
presented in Chapters 10 and 15, whilst SPOTS PMC installation and de-installation on Solaris
is presented along with SPOTS PMS installation and de-installation in Chapters 9 and 14.


2.2 Technology Plug-Ins


In SPOTS V14, support for the interfaced NMS systems and managed NE types and versions is
achieved by means of Technology Plug-Ins (TPs).
After installing/upgrading SPOTS-PMS and PMC, it is mandatory to install the Technology Plug-Ins
associated with the existing software versions of each interfaced NMS and managed NE.
The installer shall create a list of required software versions, based on the existing network
elements’ software versions.
With the required software versions list, one or more technology plug-ins shall be selected for
installation.
For more detailed information concerning technology plug-ins, see:
• The “Technology Plug-Ins” chapter of the User Manual;
• The TPs documentation, included on the TPs distribution DVD in HTML format. It can be
viewed with a regular web browser by opening the page:
/cdrom/cdrom0/doc/TpDocStart.htm

2.3 Platform Hardware & Standard Software

2.3.1 SPOTS PMS, PMC and RTA (Solaris environment)

2.3.1.1 Hardware

SPOTS PMC can be installed on any UltraSPARC-III based workstation (the minimum configuration
is described in the Release Notes).
SPOTS RTA, depending on the managed network size, can be installed on the same machine as
SPOTS PMS or on a separate machine (refer to the possible hardware configurations described
below).
For SPOTS PMS, depending on the managed network size, the following configurations are
certified for this SPOTS version:
 The HW used should be chosen according to the information provided by your
Nokia Siemens Networks representative, who will help you determine which
configuration best fits your network.

 The described HW configurations apply to the Single Server environment (Small, Medium)
and to the Distributed environment (Large).
More details on Single Server and Distributed environments are described in [1], Section 1.2
(Environment). Other possibilities for HW configurations will be provided on request.

In the following pages, the possible hardware configurations are presented in four tables, two for
each environment:

 Single Server Environment:


• Standard Configurations (see Table 3 - Standard HW Configurations (Single Server))
• Legacy Configurations (see Table 4 – Legacy HW Configurations (Single Server Environment))

 Distributed Environment:
• Standard Configurations (see Table 5 – Standard HW Configurations (Distributed Environment))
• Legacy Configurations (see Table 6 – Legacy HW Configurations (Distributed Environment))

The following applies also to the HW configurations:

• Configurations with the letter “A” are floor-stand models.
• Configurations with the letters “B”, “C” or “D”, or with no letter, are rack-mounted models.


2.3.1.2 Single Server Environment

Standard Configurations

HW Configuration   Function                                Expansion    Machine                      CPUs     RAM     Internal Disks   External Storage
                                                           supported?

Small A            Application Server + Database Server    No           Sun Ultra 45                 1        4 GB    2 x 146 GB       -
Small B            Application Server + Database Server    Yes          Sun Fire V445                2        8 GB    8 x 146 GB       -
Small C            Application Server + Database Server    Yes          Sun Fire V490                2        8 GB    2 x 146 GB       -
Small D            Application Server + Database Server    Yes          Sun SPARC Enterprise M3000   1 quad   16 GB   4 x 146 GB       -
Medium A           Application Server + Database Server    Yes          Sun Fire V445                2        8 GB    8 x 146 GB       StorEdge 3320, 2 RAID controllers, 12 x 146 GB
Medium B           Application Server + Database Server    Yes          Sun Fire V490                2        8 GB    2 x 146 GB       StorEdge 3320, 2 RAID controllers, 12 x 146 GB
Medium C           Application Server + Database Server    Yes          Sun SPARC Enterprise M3000   1 quad   16 GB   4 x 146 GB       StorageTek ST2540, 2 RAID controllers, 12 x 146 GB or 12 x 300 GB
Medium D           Application Server + Database Server    Yes          Sun SPARC Enterprise M3000   1 quad   16 GB   4 x 146 GB       StorageTek ST2540 + StorageTek ST2501, 2 RAID controllers, 12 x 300 GB + 12 x 1 TB

Table 3 - Standard HW Configurations (Single Server)


Consult Figure 1, Small and Medium Configurations, Single Server Environment in order to get an overview of the applicable HW
configuration.


[Figure 1 (overview): Small A (floor stand) runs the Application Server + Database Server on a Sun Ultra 45 with an internal Sun DAT 72 tape drive (DB Type: Small). Small B/C/D (rack mounted) run on a V445/V490/M3000 with an external Sun StorEdge C2 / Sun StorageTek SL24 tape drive and an optional V445 WebReports server (DB Type: Small); they are expandable to Medium. Medium A/B/C/D run on a V445/V490/M3000 with external StorEdge 3320 / StorageTek 2540 / StorageTek 2501 storage, an external Sun StorEdge C2 / Sun StorageTek SL24 tape drive and an optional V445 WebReports server (DB Type: Medium); they are expandable to Large.]

Figure 1, Small and Medium Configurations, Single Server Environment

Legacy Configurations from SPOTS V12 and V13

HW Configuration   Function                                Expansion    Machine         CPUs   RAM    Internal Disks   External Storage
                                                           supported?

Small A1           Application Server + Database Server    No           Sun Fire V250   1      4 GB   3 x 73 GB        -
Small B1           Application Server + Database Server    Yes          Sun Fire V440   2      4 GB   4 x 73 GB        -
Medium B1          Application Server + Database Server    Yes          Sun Fire V440   2      4 GB   4 x 73 GB        SE 3320/3310, 2 RAID controllers, 12 x 73 / 146 GB

Table 4 – Legacy HW Configurations (Single Server Environment)


2.3.1.3 Distributed Environment Large

Standard Configurations

Hardware        Function               Expansion    Machine   CPUs/Cores   RAM      Internal Disks   External Storage
Configuration                          supported?

Large A         Application Server A   No           V490      4 dual       16 GB    2 x 146 GB       -
                Database Server A      No           V445      2            8 GB     8 x 146 GB       StorEdge 3320, 2 RAID controllers + JBOD (24 x 146 GB)

Large B         Application Server A   No           V490      4 dual       16 GB    2 x 146 GB       -
                Database Server B      No           V490      2 dual       16 GB    2 x 146 GB       StorEdge 3320, 2 RAID controllers + JBOD (24 x 146 GB)

Large C         Application Server B   No           M4000     2 dual       32 GB    2 x 146 GB       -
                Database Server C      No           M3000     1 quad       16 GB    4 x 146 GB       StorageTek 2540, 2 RAID controllers + JBOD (24 x 146 GB or 24 x 300 GB)

Large D         Application Server B   No           M4000     2 dual       32 GB    2 x 146 GB       -
                Database Server D      No           M3000     1 quad       16 GB    4 x 146 GB       StorageTek 2540 + StorageTek 2501, 2 RAID controllers (12 x 300 GB + 12 x 1 TB)

Table 5 – Standard HW Configurations (Distributed Environment)

Consult Figure 2, Large Configuration, Distributed Environment in order to get an overview of the applicable HW configuration.


[Figure 2 (overview): in the Large A/B/C/D configurations the Database Server (V445/V490/M3000, DB Type: Large) is connected to a master StorEdge 3320 / StorageTek 2540 array and to a slave StorEdge 3320 / StorageTek 2540 (JBOD) / StorageTek 2501 expansion, while the Application Server runs on a separate V490/M4000. An external Sun StorEdge C2 / Sun StorageTek SL24 tape drive and an optional V445 WebReports server complete the setup.]

Figure 2, Large Configuration, Distributed Environment

Legacy Configurations from SPOTS V12

HW Configuration   Function             Expansion    Machine         CPUs   RAM    Internal Disks   External Storage
                                        supported?

Large B1           Application Server   No           Sun Fire V440   4      8 GB   4 x 73 GB        -
                   Database Server      No           Sun Fire V440   2      4 GB   4 x 73 GB        StorEdge 3320, 2 RAID controllers + JBOD (24 x 73 GB)

Table 6 –Legacy HW Configurations (Distributed Environment)


2.3.1.4 Hardware Configurations & Database Installation Types

The following table presents the relation between the existing HW configurations, both Standard
and Legacy, and the applicable Database Installation Type to be selected when installing the
SPOTS application.

Hardware Configuration          Database Installation Type
                                Small      Medium     Large

Small A, B, C, D                  ✓
Medium A, B, C, D                            ✓
Large A, B, C, D                                        ✓
Small Legacy (Type A1, B1)        ✓
Medium Legacy (Type B1)                      ✓
Large Legacy (Type B1)                                  ✓

Table 7 - Hardware Configurations & Database Installation Types

 For all the options described in the above table, the Database MUST be partitioned.

 For explicit details please consult the following sections:


• 2.4.1.1.1 Small A (2x146GB) Configuration
• 2.4.1.1.2 Small B (8x73GB / 146GB) Configuration
• 2.4.1.1.3 Small C (2x146GB) Configuration
• 2.4.1.1.4 Small D (4x146GB) Configuration
• 2.4.1.1.5 Medium A (8x73GB / 146GB) Configuration
• 2.4.1.1.6 Medium B (2x146GB) Configuration
• 2.4.1.1.7 Medium C (4x146GB) Configuration
• 2.4.1.1.8 Medium D (4x146GB) Configuration
• 2.4.1.2.1 Large Configuration – DB Server A
• 2.4.1.2.2 Large Configuration – DB Server B
• 2.4.1.2.3 Large Configuration – DB Server C
• 2.4.1.2.4 Large Configuration – DB Server D
• 2.4.1.2.5 Large Configuration – Application Server A
• 2.4.1.2.6 Large Configuration – Application Server B
• 2.4.2.1.1 Legacy Small A1 or Small B1 (3x73GB) Configuration
• 2.4.2.1.2 Legacy Small B2 (4x73GB) Configuration
• 2.4.2.1.3 Legacy Medium B1 (2+2x73GB) Configuration

• 2.4.2.2.1 Legacy Large B1 Configuration – DB Server
• 2.4.2.2.2 Large Configuration – Application Server

2.3.1.5 Customized Configurations

For customized configurations, contact your local Nokia Siemens Networks representative to adjust
all configuration details and to receive additional information on disk partitioning rules.

Your Nokia Siemens Networks representative can assist you in determining the SPOTS
configuration that best fits the size of your network.

2.3.1.6 Additional Items

The following additional items might be included:


• (Standard) 21 inch colour Monitor;
• DVD-ROM drive;
• Colour Printer.

 For configurations other than Small, a Legato-based solution is available for “Backup &
Restore”. This solution requires the use of an auto-changer device.

2.3.1.7 Standard Software

The following standard software products are used in SPOTS:


• Solaris Operating System (Solaris 10).
• Common Desktop Environment (CDE, V1.6) — available in the Solaris DVD.
• Oracle RDBMS Server software (Oracle 10g Database 64 bit).

2.3.1.8 SPOTS Software

The SPOTS SW consists of several packages, collectively known as APS — see Chapter 9 for a
detailed description of the SPOTS software packages.
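
As a quick way of checking which SPOTS packages are already registered on a server, the standard Solaris package tools can be used. This is a sketch only — the exact identifiers under which the APS packages are registered may differ; Chapter 9 lists the authoritative package names:

# pkginfo | grep -i spots     (lists the packages whose name or description mentions SPOTS)
# pkginfo -l spotsAS          (detailed entry — version, install date — for one package, assumed here to be registered as spotsAS)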


2.3.2 SPOTS PMC Windows environment

Hardware
The following configuration is required:

Requirement        Value                    Comment

Processor          Pentium or equivalent    Suggested
Memory             1 GB                     Suggested
Virtual memory     1 GB                     Suggested
Hard disk space    55 MB                    Mandatory
Video resolution   800x600 pixels           Mandatory
Video depth        256 colors               Mandatory
Table 8 – Hardware requirements for SPOTS PMC

The available hard-disk space must be located on a local non-removable drive.

Standard Software
The Windows SPOTS Client uses the following standard software products:
• Microsoft Windows 2003 with Service Pack 1 (or higher) or Windows XP.
• TCP/IP protocol.

SPOTS Software
The SPOTS-PMC Windows variant is installed according to the procedure described in Chapter
10.

2.4 Hard Disk Partitioning
This section describes the disk partitions to be created, for each of the SPOTS V14 supported HW
configurations (presented in Section 2.3).

 For other HW configurations, contact your local Nokia Siemens Networks representative.

 From now on, the acronym N.R. means that the logical drive is Not RAID.

 From now on, the acronym F.D. means Full Disk.

 From now on, R_disk_<disk_number> means the remaining space left on disk
<disk_number>; for example, R_disk_1 is the remaining space left on disk 1.

The tables presented in the next sections describe, for each supported HW configuration, the
partition sizes, the hard disk (or set of disks, if RAID is used) where each partition is placed, and
the RAID option used (if any).

IMPORTANT NOTES TO ALL CONFIGURATIONS:

 Never use slice 2 when creating a new partition, because it is used by the
operating system.

 For all HW configurations with Single Server Environment, it is assumed that both DB
Server and Application Server components are on the same machine.
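
When checking an existing machine against the partitioning tables in this section, the current slice layout and mount points can be inspected with standard Solaris commands. A minimal sketch, assuming the first disk is seen as c1t0d0 (the controller/target numbers differ from system to system):

# prtvtoc /dev/rdsk/c1t0d0s2      (prints the slice table of the disk; slice 2 represents the whole disk)
# df -k                           (lists the mounted file systems, to be compared with the mount points below)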


2.4.1 Standard Configurations

2.4.1.1 Single Server Configurations

2.4.1.1.1 Small A (2x146GB) Configuration


This is the configuration that will be used on the Sun Ultra 45 for V14.

Disk 0 (146GB)
Slice Mount point Size (GB) UFS mount option

S0 / 18
S1 swap 4
S2
S3 /spots_db1 8 forcedirectio,nologging
s4 /spots_db3 56 forcedirectio,nologging
s5 /spots_db4 R_disk forcedirectio,nologging
s6
s7
Disk 1 (146GB)
Slice Mount point Size (GB) UFS mount option

s0 /export/home 4
s1 /var/opt 10
s2
s3 /opt 10
s4 /spots_db2 8 forcedirectio,nologging
s5 /spots_db5 56 forcedirectio,nologging
s6 /spots_db6 R_disk forcedirectio,nologging
s7

Table 9 - Disk Partitioning, Small A Configuration


To define the various mount options (see Table 9, column ‘UFS mount option’), edit the file
‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or re-
mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging

/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging
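
If a full reboot is not convenient, each file system can usually be remounted individually so that the new options are picked up from ‘/etc/vfstab‘. A sketch only, assuming /spots_db2 is not in use at that moment:

# umount /spots_db2               (unmount the file system; it must not be in use)
# mount /spots_db2                (mount it again; the options are re-read from /etc/vfstab)
# mount -v | grep spots_db2       (verify that forcedirectio and nologging are now in effect)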

2.4.1.1.2 Small B (8x73GB / 146GB) Configuration


This is the configuration that will be used on the Sun Fire V445 for V14.

Disk 0 (73GB /146 GB)


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1/1
S1 / 18 / 36
S2
S3 swap 8/8
s4 /var/opt 10 / 50
s5 /opt 25/ 30
s6 /export/home R_disk
s7 /spots_rman 2/2
Disk 1 (73GB /146 GB)
Slice Mount point Size (GB) UFS mount option
s0 /replica2 1/1
s1 /spots_db1 8/8 forcedirectio,nologging
s2
s3 /spots_db4 R_disk forcedirectio,nologging
s4
s5
s6
s7
Disk 2 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica3 1/1
S1 /spots_db2 8/8 forcedirectio,nologging
S2
S3 /spots_db3 R_disk forcedirectio,nologging
s4 /spots_db7 20 / 42
s5
s6
s7
Disk 3 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica4 1/1
s1 /spots_db5 45 / 67 forcedirectio,nologging
s2
s3 /spots_db6 R_disk forcedirectio,nologging
s4
s5
s6
s7
Disk 4 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option


73 / 146

S0 /replica5 1/1
S1 /root_mirror 18 / 36
S2
S3 /swap_mirror 8/8
s4 /var_opt_mirror 10 / 50
s5 /opt_mirror 25 / 30
s6 /home_mirror R_disk
s7 /spots_rman_mirror 2/2
Disk 5 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica6 1/1
s1 /spots_db1_mirror 8/8
s2
s3 /spots_db4_mirror R_disk
s4
s5
s6
s7
Disk 6 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica7 1/1
S1 /spots_db2_mirror 8/8
S2
S3 /spots_db3_mirror R_disk
s4 /spots_db7_mirror 20 / 42
s5
s6
s7
Disk 7 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica8 1/1
s1 /spots_db5_mirror 45 / 67
s2
s3 /spots_db6_mirror R_disk
s4
s5
s6
s7

Table 10 - Disk Partitioning, Small B Configuration

To define the various mount options (see Table 10, column ‘UFS mount option’), edit the file
‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or re-
mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.1.1.3 Small C (2x146GB) Configuration


This is the configuration that will be used on the Sun Fire V490 for V14.

Disk 0 (146GB)
Slice Mount point Size (GB) UFS mount option

S0 / 18
S1 swap 4
S2
S3 /spots_db1 8 forcedirectio,nologging
s4 /spots_db3 56 forcedirectio,nologging
s5 /spots_db4 R_disk forcedirectio,nologging
s6
s7
Disk 1 (146GB)
Slice Mount point Size (GB) UFS mount option

s0 /export/home 4
s1 /var/opt 10
s2
s3 /opt 10
s4 /spots_db2 8 forcedirectio,nologging
s5 /spots_db5 56 forcedirectio,nologging
s6 /spots_db6 R_disk forcedirectio,nologging
s7

Table 11 - Disk Partitioning, Small Configuration C


To define the various mount options (see Table 11, column ‘UFS mount option’), edit the file
‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or re-
mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.1.1.4 Small D (4x146GB) Configuration


This is the configuration that will be used on the Sun SPARC Enterprise Server M3000 for V14.

Disk 0 (146GB)
Slice Mount point Size (GB) UFS mount option

S0 /replica1 1
S1 / 18
S2
S3 swap 4
s4 /spots_db1 8 forcedirectio,nologging
s5 /spots_db3 56 forcedirectio,nologging
s6 /spots_db4 R_disk forcedirectio,nologging
s7
Disk 1 (146GB)
Slice Mount point Size (GB) UFS mount option

s0 /replica2 1
s1 /export/home 4
s2
s3 /var/opt 10
s4 /opt 10
s5 /spots_db2 8 forcedirectio,nologging
s6 /spots_db5 56 forcedirectio,nologging
s7 /spots_db6 R_disk forcedirectio,nologging
Disk 2 (146GB)
Slice Mount point Size (GB) UFS mount option

S0 /replica3 1
S1 /root_mirror 18
S2
S3 /swap_mirror 4
s4 /spots_db1_mirror 8 forcedirectio,nologging
s5 /spots_db3_mirror 56 forcedirectio,nologging
s6 /spots_db4_mirror R_disk forcedirectio,nologging
s7
Disk 3 (146GB)
Slice Mount point Size (GB) UFS mount option

s0 /replica4 1
s1 /home_mirror 4
s2
s3 /var_opt_mirror 10
s4 /opt_mirror 10
s5 /spots_db2_mirror 8 forcedirectio,nologging
s6 /spots_db5_mirror 56 forcedirectio,nologging
s7 /spots_db6_mirror R_disk forcedirectio,nologging

Table 12 - Disk Partitioning, Small Configuration D
To define the various mount options (see Table 12, column ‘UFS mount option’), edit the file
‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or re-
mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.1.1.5 Medium A (8x73GB / 146GB) Configuration


This configuration will be used with the Sun Fire V445 and it can have 73 Gb disks or 146 Gb
disks.
The values for the partitions sizes for both disks are presented separated by “/”, i.e.:
• <partition size for 73 Gb disk> / <partition size for 146 Gb disk>

 The replica partitions: replica2, replica3, and replica4 MUST be the first ones to
be created in disk 1 (replica2), disk 2 (replica3) and disk 3 (replica4), and they
MUST be created at the beginning of the corresponding disk, i.e., on slice 0.
IMPORTANT NOTES:
• On the Large DB installation types Medium A and Large A, disks Disk0 through Disk7
correspond to the internal drives.
• The partitions which contain the word “mirror” must have the same space in MB as
the corresponding partitions without the word “mirror”.
• If each pair of disks involved in a mirror are NOT of the same model and geometry then
take care of the following:
 Start the disk partitioning by the one with less space (you can know the space of
each disk in the OS-installation partitioning window).
 The partitions on the disk with more space must have at least 3 Megabytes more
than the respective ones in the disk with less space, except for the replica
partitions (partitions that contain the word “replica”).
 If the disk with more space has just one replica partition, then the replica partition
must contain the remaining disk space (instead of 1GB).
 If the disk with more space has two replica partitions, then the remaining replica
partition with 1GB must contain the remaining disk space (instead of 1GB).
• The OS-installer partitioning window can show lack of precision in translation between
Mega Bytes and Cylinders. For each pair of disks making up a mirror, if you notice this
inconsistency, be sure that the mirrored partitions have the same space in Megabytes.
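
Assuming Solaris Volume Manager is used for the disk mirroring described in Chapter 6, the /replica* slices would hold the state database replicas and the *_mirror slices the second half of each mirror. The following sketch only illustrates how such slices are typically used — the device names (c1t0d0, c1t4d0) and metadevice numbers are placeholders, and the authoritative procedure is the one given in Chapter 6:

# metadb -a -f -c 2 c1t0d0s0 c1t4d0s0     (create state database replicas on two /replica slices)
# metainit d11 1 1 c1t0d0s4               (submirror on the primary /spots_db1 slice)
# metainit d12 1 1 c1t4d0s4               (submirror on the corresponding /spots_db1_mirror slice)
# metainit d10 -m d11                     (create the mirror with the primary submirror)
# metattach d10 d12                       (attach the second submirror; resynchronisation starts)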


Internal Disk 0 (73GB / 146 GB)


Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica1 1/1
S1 / 20 / 80
S2
S3 swap 8/8
s4 /spots_db1 R_disk forcedirectio,nologging
s5 /spots_rman 2/2
s6
s7
Internal Disk 1 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica2 1/1
s1 /opt 24 / 51
s2
s3 /spots_ db2 14 / 30 forcedirectio,nologging
s4 /export/home R_disk
s5
s6
s7
Internal Disk 2 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica3 1/1
S1 /var/opt R_disk
S2
S3
s4
s5
s6
s7
Internal Disk 3 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica4 1/1
s1 /spots_db3 R_disk forcedirectio,nologging
s2
s3
s4
s5
s6
s7
Internal Disk 4 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica5 1/1
S1 /root_mirror 20 / 80

S2
S3 /swap_mirror 8/8
s4 /spots_db1_mirror R_disk
s5 /spots_rman_mirror 2/2
s6
s7
Internal Disk 5 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica6 1/1
s1 /opt_mirror 24 / 51
s2
s3 /spots_ db2_mirror 14 / 30
s4 /home_mirror R_disk
s5
s6
s7
Internal Disk 6 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica7 1/1
S1 /var_opt_mirror R_disk
S2
S3
s4
s5
s6
s7
Internal Disk 7 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica8 1/1
s1 /spots_db3_mirror R_disk
s2
s3
s4
s5
s6
s7
Table 13 - Disk Partitioning, Medium Configuration A, Internal Disks


StorEdge Disks RAID1


Disks Mount point Size (GB) UFS mount option
8,14 /spots_db4 F.D. forcedirectio,nologging
StorEdge Disks RAID10
Disks Mount point Size (GB) UFS mount option
9,10,11,12,13 /spots_db5 F.D. forcedirectio,nologging
15,16,17,18,19 /spots_db6 F.D. forcedirectio,nologging
Table 14 - Disk Partitioning, Medium Configuration A, External Disks

To define the various mount options (see Table 13 and Table 14, column ‘UFS mount option’), edit
the file ‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or
re-mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging
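
The external StorEdge logical drives in Table 14 are used as full-disk (F.D.) file systems. Chapter 7 contains the authoritative array configuration procedure; the sketch below only illustrates how a newly mapped LUN typically becomes visible and mountable on Solaris (the device name c2t0d0 is a placeholder):

# devfsadm                                (create device nodes for the newly mapped array LUNs)
# format                                  (the StorEdge logical drives should now be listed as additional disks)
# newfs /dev/rdsk/c2t0d0s0                (create a UFS file system on a slice of the LUN, after labelling it in format)
# mkdir -p /spots_db4                     (create the mount point if it does not exist yet)
# mount -F ufs -o forcedirectio,nologging /dev/dsk/c2t0d0s0 /spots_db4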

2.4.1.1.6 Medium B (2x146GB) Configuration


This configuration will be used with the Sun Fire V490 and it can have only 146 GB disks.

 The replica partitions: replica1 and replica2, MUST BE the first ones to be created
in disk 0 (replica1), disk 1 (replica2), and they MUST BE created at the beginning
of the corresponding disk, i.e., on slice 0.
IMPORTANT NOTES:
• On the Large DB installation types Medium B and Large B, disks Disk0 and Disk1
correspond to the internal drives.
• The partitions which contain the word “mirror” must have the same space in MB as
the corresponding partitions without the word “mirror”.
• If each pair of disks involved in a mirror are NOT of the same model and geometry then
take care of the following:
 Start the disk partitioning by the one with less space (you can know the space of
each disk in the OS-installation partitioning window).
 The partitions on the disk with more space must have at least 3 Megabytes more
than the respective ones in the disk with less space, except for the replica
partitions (partitions that contain the word “replica”).

 If the disk with more space has just one replica partition, then the replica partition
must contain the remaining disk space (instead of 1GB).
 If the disk with more space has two replica partitions, then the remaining replica
partition with 1GB must contain the remaining disk space (instead of 1GB).
• The OS-installer partitioning window can show lack of precision in translation between
Mega Bytes and Cylinders. For each pair of disks making up a mirror, if you notice this
inconsistency, be sure that the mirrored partitions have the same space in Megabytes.

Internal Disk 0 (146 GB)


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 16
S2
S3 swap 8
s4 /export/home 5
s5 /var/opt 20
s6 /opt 15
s7 /spots_db4 R_disk (~75G) forcedirectio,nologging
Internal Disk 1 (146 GB)
Slice Mount point Size (GB) UFS mount option
s0 /replica2 1
s1 /root_mirror 16
s2
s3 /swap_mirror 8
s4 /home_mirror 5
s5 /var_opt_mirror 20
s6 /opt_mirror 15
s7 /spots_db4_mirror R_disk (~75G)
Table 15 - Disk Partitioning, Medium Configuration B, Internal Disks

StorEdge Disks RAID10


Disks Mount point Size (GB) UFS mount option
2,3,…,6,7 /spots_db5 F.D. forcedirectio,nologging
8,9,…,12,13 /spots_db6 F.D. forcedirectio,nologging
Table 16 - Disk Partitioning, Medium Configuration B, External Disks

To define the various mount options (see Table 15 and Table 16, column ‘UFS mount option’), edit
the file ‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or
re-mount of each file system, the new mount option will take effect. An example is presented below.

Example of “/etc/vfstab” without mount options:

/dev/dsk/c3t0d0s7 /dev/rdsk/c3t0d0s7 /spots_db4 ufs 2 yes -

Example of “/etc/vfstab” with forcedirectio,nologging mount option:

/dev/dsk/c3t0d0s7 /dev/rdsk/c3t0d0s7 /spots_db4 ufs 2 yes forcedirectio,nologging


2.4.1.1.7 Medium C (4x146GB) Configuration


This configuration will be used with the Sun SPARC Enterprise M3000 and it can have 146 GB or
300 GB disks. Notice that 300 GB disks are only supported when installed in the StorageTek
ST2540.
 The replica partitions replica1, replica2 through replica4 MUST be the first ones to
be created in disk 0 (replica1), disk 1 (replica2) through disk 3 (replica4), and they MUST
be created at the beginning of the corresponding disk, i.e., on slice 0.
IMPORTANT NOTES:
• On the Large DB installation types Medium C and Large C, disks Disk0 through Disk3
correspond to the internal drives.
• The partitions which contain the word “mirror” must have the same space in MB as
the corresponding partitions without the word “mirror”.
• If each pair of disks involved in a mirror are NOT of the same model and geometry then
take care of the following:
 Start the disk partitioning by the one with less space (you can know the space of
each disk in the OS-installation partitioning window).
 The partitions on the disk with more space must have at least 3 Megabytes more
than the respective ones in the disk with less space, except for the replica
partitions (partitions that contain the word “replica”).
 If the disk with more space has just one replica partition, then the replica partition
must contain the remaining disk space (instead of 1GB).
 If the disk with more space has two replica partitions, then the remaining replica
partition with 1GB must contain the remaining disk space (instead of 1GB).
• The OS-installer partitioning window can show lack of precision in translation between
Mega Bytes and Cylinders. For each pair of disks making up a mirror, if you notice this
inconsistency, be sure that the mirrored partitions have the same space in Megabytes.

Internal Disk 0 (146 GB)


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 30
S2
S3 swap 16
s4 /spots_rman 2
s5 /var/opt 50
s6 /opt 30
s7 /export/home R_disk
Internal Disk 1 (146 GB)
Slice Mount point Size (GB) UFS mount option
s0 /replica2 1
s1 /spots_db4 R_disk
s2
s3

s4
s5
s6
s7
Internal Disk 2 (146 GB)
Slice Mount point Size (GB) UFS mount option
S0 /replica3 1
S1 /root_mirror 30 forcedirectio,nologging
S2
S3 /swap_mirror 16
s4 /spots_rman_mirror 2
s5 /var_opt_mirror 50
s6 /opt_mirror 30
s7 /home_mirror R_disk
Internal Disk 3 (146 GB)
Slice Mount point Size (GB) 146 UFS mount option
s0 /replica4 1
s1 /spots_db4_mirror R_disk forcedirectio,nologging
s2
s3
s4
s5
s6
s7
Table 17 - Disk Partitioning, Medium Configuration C, Internal Disks

StorEdge Disks RAID10


Disks Mount point Size (GB) UFS mount option
4,5,6,7,8,9          /spots_db1, /spots_db5    F.D.    forcedirectio,nologging
10,11,12,13,14,15    /spots_db2, /spots_db6    F.D.    forcedirectio,nologging
Table 18 - Disk Partitioning, Medium Configuration C, External Disks

To define the various mount options (see Table 17 and Table 18, column ‘UFS mount option’), edit
the file ‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or
re-mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging


2.4.1.1.8 Medium D (4x146GB) Configuration


This configuration will be used with the Sun SPARC Enterprise M3000 and it can have 146 GB,
300 GB or 1TB disks. Notice that 300 GB disks are only supported when installed in the
StorageTek ST2540 and 1TB disks are only supported when installed in the StorageTek ST2501.
 Notice that this configuration is very similar to the Medium C (4x146GB) Configuration. The
difference is the additional Sun StorageTek 2501, which provides 12 x 1TB of disk space for
backups. The backups are implemented by SPOTS-BAR.
 The replica partitions replica1, replica2 through replica4 MUST be the first ones to
be created in disk 0 (replica1), disk 1 (replica2) through disk 3 (replica4), and they MUST
be created at the beginning of the corresponding disk, i.e., on slice 0.
IMPORTANT NOTES:
• On the Large DB installation types Medium D and Large D, disks Disk0 through Disk3
correspond to the internal drives.
• The partitions which contain the word “mirror” must have the same space in MB as
the corresponding partitions without the word “mirror”.
• If each pair of disks involved in a mirror are NOT of the same model and geometry then
take care of the following:
 Start the disk partitioning by the one with less space (you can know the space of
each disk in the OS-installation partitioning window).
 The partitions on the disk with more space must have at least 3 Megabytes more
than the respective ones in the disk with less space, except for the replica
partitions (partitions that contain the word “replica”).
 If the disk with more space has just one replica partition, then the replica partition
must contain the remaining disk space (instead of 1GB).
 If the disk with more space has two replica partitions, then the remaining replica
partition with 1GB must contain the remaining disk space (instead of 1GB).
• The OS-installer partitioning window can show lack of precision in translation between
Mega Bytes and Cylinders. For each pair of disks making up a mirror, if you notice this
inconsistency, be sure that the mirrored partitions have the same space in Megabytes.

Internal Disk 0 (146 GB)


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 30
S2
S3 swap 16
s4 /spots_rman 2
s5 /var/opt 50
s6 /opt 30
s7 /export/home R_disk
Internal Disk 1 (146 GB)
Slice Mount point Size (GB) UFS mount option
s0 /replica2 1

s1 /spots_db4 R_disk
s2
s3
s4
s5
s6
s7
Internal Disk 2 (146 GB)
Slice Mount point Size (GB) UFS mount option
S0 /replica3 1
S1 /root_mirror 30 forcedirectio,nologging
S2
S3 /swap_mirror 16
s4 /spots_rman_mirror 2
s5 /var_opt_mirror 50
s6 /opt_mirror 30
s7 /home_mirror R_disk
Internal Disk 3 (146 GB)
Slice Mount point Size (GB) 146 UFS mount option
s0 /replica4 1
s1 /spots_db4_mirror R_disk forcedirectio,nologging
s2
s3
s4
s5
s6
s7
Table 19 - Disk Partitioning, Medium Configuration D, Internal Disks

StorEdge ST2540 Disks RAID10 (12 x 300GB)


Disks Mount point Size (GB) UFS mount option
4,5,6,7,8,9          /spots_db1, /spots_db5    F.D.    forcedirectio,nologging
10,11,12,13,14,15    /spots_db2, /spots_db6    F.D.    forcedirectio,nologging
Table 20 - Disk Partitioning, Medium Configuration D, External Disks

StorEdge ST2501 Disks RAID10 (12 x 1TB)


Disks Mount point Size (GB) UFS mount option
16,17,18,19,20,21    /backup          F.D.    forcedirectio,nologging
22,23,24,25,26,27    backup mirror    F.D.    forcedirectio,nologging
Table 21 - Disk Partitioning, Medium Configuration D, External Disks


To define the various mount options (see Table 20 and Table 21, column ‘UFS mount option’), edit
the file ‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or
re-mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.1.2 Distributed Configurations

2.4.1.2.1 Large Configuration – DB Server A


This is the hard disk configuration of the Sun Fire V445 for the DB Server and it can have 73 GB
disks or 146 GB disks.
The values for the partitions sizes for both disks are presented separated by “/”, i.e.:
• <partition size for 73 Gb disk> / <partition size for 146 Gb disk>
All the important notes from the Medium Configuration A apply to this configuration as well.

Internal Disk 0 (73GB / 146 GB)


Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica1 1/1
S1 / 20 / 50
S2
S3 swap 8/8
s4 /spots_db1 14 / 30 forcedirectio,nologging
s5 /spots_rman R_disk
s6
s7
Internal Disk 1 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica2 1/1
s1 /opt 24 / 30
s2
s3 /spots_ db2 14 / 30 forcedirectio,nologging
s4 /var/opt 20 / 60
s5 /export/home R_disk
s6
s7
Internal Disk 2 (73GB / 146 GB)

Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica3 1/1
S1 /spots_db3 R_disk forcedirectio,nologging
S2
S3
s4
s5
s6
s7
Internal Disk 3 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica4 1/1
s1 /spots_db4 R_disk forcedirectio,nologging
s2
s3
s4
s5
s6
s7
Internal Disk 4 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica5 1/1
S1 /root_mirror 20 / 50
S2
S3 /swap_mirror 8/8
s4 /spots_db1_mirror 14 / 30
s5 /spots_rman_mirror R_disk
s6
s7
Internal Disk 5 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica6 1/1
s1 /opt_mirror 24 / 30
s2
s3 /spots_ db2_mirror 14 / 30
s4 /var_opt_mirror 20 / 60
s5 /home_mirror R_disk
s6
s7
Internal Disk 6 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
S0 /replica7 1
S1 /spots_db3_mirror R_disk
S2
S3


s4
s5
s6
s7
Internal Disk 7 (73GB / 146 GB)
Slice Mount point Size (GB) UFS mount option
73 / 146
s0 /replica8 1
s1 /spots_db4_mirror R_disk
s2
s3
s4
s5
s6
s7
Table 22 - Disk Partitioning, Large Configuration A, Internal Disks – DB Server

StorEdge Disks RAID1+0


Disks Mount point Size (GB) UFS mount option
8,9,…,18,19 /spots_db5 F.D. forcedirectio,nologging
20,21,…,30,31 /spots_db6 F.D. forcedirectio,nologging

Table 23 - Disk Partitioning, Large Configuration A, External Disks – DB Server


To define the various mount options (see Table 22 and Table 23, column ‘UFS mount option’), edit
the file ‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or
re-mount of each file system, the new mount option will take effect. An example is presented below.
Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.1.2.2 Large Configuration – DB Server B


This is the hard disk configuration of the Sun Fire V490 for the DB Server and it has only 146 Gb
hard disks.
All the important notes from the Medium Configuration B apply to this configuration as well.

Internal Disk 0 (146 GB)


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 15
S2
S3 swap 8

s4 /export/home 4
s5 /var/opt 10
s6 /opt 14
s7 /spots_db4 R_disk (~88G) forcedirectio,nologging
Internal Disk 1 (146 GB)
Slice Mount point Size (GB) UFS mount option
s0 /replica2 1
s1 /root_mirror 15
s2
s3 /swap_mirror 8
s4 /home_mirror 4
s5 /var_opt_mirror 10
s6 /opt_mirror 14
s7 /spots_db4_mirror R_disk (~88G)
Table 24 - Disk Partitioning, Large Configuration B, Internal Disks – DB Server

StorEdge Disks RAID1+0


Disks Mount point Size (GB) UFS mount option
2,3,…,12,13 /spots_db5 F.D. forcedirectio,nologging
14,15,…,24,25 /spots_db6 F.D. forcedirectio,nologging

Table 25 - Disk Partitioning, Large Configuration B, External Disks – DB Server

To define the various mount options (see Table 24 and Table 25, column ‘UFS mount option’), edit
the file ‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or
re-mount of each file system, the new mount option will take effect. An example is presented below.
Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.1.2.3 Large Configuration – DB Server C


This is the hard disk configuration of the Sun SPARC Enterprise M3000 for the DB Server and it
can have 146 GB or 300 GB disks. Notice that 300 GB disks are only supported when installed in
the StorageTek ST2540.

All the important notes from the Medium Configuration C apply to this configuration as well.

Internal Disk 0 (146 GB)


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 30
S2


S3 swap 16
s4 /spots_rman 2
s5 /var/opt 50
s6 /opt 30
s7 /export/home R_disk
Internal Disk 1 (146 GB)
Slice Mount point Size (GB) UFS mount option
s0 /replica2 1
s1 /spots_db4 R_disk
s2
s3
s4
s5
s6
s7
Internal Disk 2 (146 GB)
Slice Mount point Size (GB) UFS mount option
S0 /replica3 1
S1 /root_mirror 30 forcedirectio,nologging
S2
S3 /swap_mirror 16
s4 /spots_rman_mirror 2
s5 /var_opt_mirror 50
s6 /opt_mirror 30
s7 /home_mirror R_disk
Internal Disk 3 (146 GB)
Slice Mount point Size (GB) 146 UFS mount option
s0 /replica4 1
s1 /spots_db4_mirror R_disk forcedirectio,nologging
s2
s3
s4
s5
s6
s7
Table 26 - Disk Partitioning, Large Configuration C, Internal Disks – DB Server

StorEdge Disks RAID1+0


Disks Mount point Size (GB) UFS mount option
4,5,6,7,8,9,16,17,18,19,20,21         /spots_db1, /spots_db5    F.D.    forcedirectio,nologging
10,11,12,13,14,15,22,23,24,25,26,27   /spots_db2, /spots_db6    F.D.    forcedirectio,nologging

Table 27 - Disk Partitioning, Large Configuration C, External Disks – DB Server

To define the various mount options (see Table 26 and Table 27, column ‘UFS mount option’), edit
the file ‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or
re-mount of each file system, the new mount option will take effect. An example is presented below.
Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.1.2.4 Large Configuration – DB Server D


This is the hard disk configuration of the Sun SPARC Enterprise M3000 for the DB Server and it
can have 146 GB, 300 GB or 1TB disks. Notice that 300 GB disks are only supported when
installed in the StorageTek ST2540 and 1TB disks are only supported when installed in the
StorageTek ST2501.

All the important notes from the Medium Configuration D also apply to this configuration.

Internal Disk 0 (146 GB)


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 30
S2
S3 swap 16
s4 /spots_rman 2
s5 /var/opt 50
s6 /opt 30
s7 /export/home R_disk
Internal Disk 1 (146 GB)
Slice Mount point Size (GB) UFS mount option
s0 /replica2 1
s1 /spots_db4 R_disk
s2
s3
s4
s5
s6
s7
Internal Disk 2 (146 GB)
Slice Mount point Size (GB) UFS mount option
S0 /replica3 1
S1 /root_mirror 30 forcedirectio,nologging
S2


S3 /swap_mirror 16
s4 /spots_rman_mirror 2
s5 /var_opt_mirror 50
s6 /opt_mirror 30
s7 /home_mirror R_disk
Internal Disk 3 (146 GB)
Slice Mount point Size (GB) UFS mount option
s0 /replica4 1
s1 /spots_db4_mirror R_disk forcedirectio,nologging
s2
s3
s4
s5
s6
s7
Table 28 - Disk Partitioning, Large Configuration D, Internal Disks – DB Server

StorEdge ST2540 Disks RAID10 (12 x 300GB)


Disks Mount point Size (GB) UFS mount option
4,5,6,7,8,9         /spots_db1, /spots_db5   F.D.   forcedirectio,nologging
10,11,12,13,14,15   /spots_db2, /spots_db6   F.D.   forcedirectio,nologging
Table 29 - Disk Partitioning, Large Configuration D, External Disks – DB Server

StorEdge ST2501 Disks RAID10 (12 x 1TB)


Disks Mount point Size (GB) UFS mount option
16,17,18,19,20,21   /backup         F.D.   forcedirectio,nologging
22,23,24,25,26,27   backup mirror   F.D.   forcedirectio,nologging
Table 30 - Disk Partitioning, Large Configuration D, External Disks

To define the various mount options (see Table 29 and Table 30, column ‘UFS mount option’), edit
the file ‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or
re-mount of each file system, the new mount option will take effect. An example is presented below.
Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.1.2.5 Large Configuration – Application Server A
This is the hard disk configuration of the Sun Fire V490 for the Application Server.

All the important notes from the Medium Configuration are applied to this configuration as well.

Internal Disk 0 (146GB)


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 20
S2
S3 swap 16
s4 /opt 20
s5 /var/opt 70
s6 /export/home R_disk
s7
Internal Disk 1 (146GB)
Slice Mount point Size (GB) UFS mount option
S0 /replica2 1
S1 /root_mirror 20
S2
S3 /swap_mirror 16
s4 /opt_mirror 20
s5 /var_opt_mirror 70
s6 /home_mirror R_disk
s7
Table 31 - Disk Partitioning, Large Configuration - Application Server

2.4.1.2.6 Large Configuration – Application Server B


This is the hard disk configuration of the Sun SPARC Enterprise M4000 for the Application Server.

All the important notes from the Medium Configuration C are applied to this configuration as well.

Internal Disk 0 (146GB)


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 20
S2
S3 swap 16
s4 /opt 20
s5 /var/opt 70
s6 /export/home R_disk
s7
Internal Disk 1 (146GB)
Slice Mount point Size (GB) UFS mount option
S0 /replica2 1
S1 /root_mirror 20
S2
S3 /swap_mirror 16


s4 /opt_mirror 20
s5 /var_opt_mirror 70
s6 /home_mirror R_disk
s7
Table 32 - Disk Partitioning, Large Configuration - Application Server

2.4.2 Legacy Configurations

2.4.2.1 Single Server Configurations

2.4.2.1.1 Legacy Small A1 or Small B1 (3x73GB) Configuration


This is the configuration that will be used on the Sun Fire V240 / V250 from V12 or V13.
Real Time is not supported for configurations with less than 4GB of RAM.

Disk 0 (73GB)
Slice Mount point Size (GB) UFS mount option

S0 / 14
S1 swap 4
S2
S3 /export/home 5
s4 /spots_db1 8 forcedirectio,nologging
s5 /spots_db4 R_disk forcedirectio,nologging
s6
s7
Disk 1 (73GB)
Slice Mount point Size (GB) UFS mount option

s0 /opt 10
s1 /spots_db5 R_disk forcedirectio,nologging
s2
s3 /spots_db6 18 forcedirectio,nologging
s4
s5
s6
s7
Disk 2 (73GB)
Slice Mount point Size (GB) UFS mount option

s0 /var/opt 15
s1 /spots_db2 8 forcedirectio,nologging
s2
s3 /spots_db3 R_disk forcedirectio,nologging
s4
s5

s6
s7
Table 33 - Disk Partitioning, Small Configuration – Type A1 and B1
To define the various mount options (see Table 33, column ‘UFS mount option’), edit the file
‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or re-
mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.2.1.2 Legacy Small B2 (4x73GB) Configuration


This is the configuration that will be used on the Sun Fire V440 from V12.

Disk 0 (73GB)
Slice Mount point Size (GB) UFS mount option
S0 / 18
S1 swap 8
S2
S3 /var/opt 10
S4 /opt 25
S5 /export/home R_disk
S6 /spots_rman 2
S7
Disk 1 (73GB)
Slice Mount point Size (GB) UFS mount option
S0 /spots_db1 8 forcedirectio
S1 /spots_db7 R_disk forcedirectio
S2
S3
S4
S5
S6
S7
Disk 2 (73GB)
Slice Mount point Size (GB) UFS mount option
S0 /spots_db2 8 forcedirectio
S1 /spots_db4 R_disk forcedirectio
S2
S3 /spots_db3 20
S4


S5
S6
S7
Disk 3 (73GB)
Slice Mount point Size (GB) UFS mount option
S0 /spots_db5 45 forcedirectio
S1 /spots_db6 R_disk forcedirectio
S2
S3
S4
S5
S6
S7
Table 34 - Disk Partitioning, Small Configuration – Type B2

To define the various mount options (see Table 34, column ‘UFS mount option’), edit the file
‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or re-
mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.2.1.3 Legacy Medium B1 (2+2x73GB) Configuration


This is the configuration that will be used on the Sun Fire V440 from V12.

 The replica partitions replica2, replica3, and replica4 MUST be the first
ones to be created in disk 1 (replica2), disk 2 (replica3) and disk 3 (replica4), and
they MUST be created at the beginning of the corresponding disk, i.e., on slice 0.
IMPORTANT NOTES:
• On the Medium and Large DB installation types, Disk0 through Disk3 correspond to the
internal drives.
• The partitions which contain the word “mirror” must have the same space in MB as
the corresponding partitions without the word “mirror”.
• If each pair of disks involved in a mirror is NOT of the same model and geometry, then
take care of the following:
 Start the disk partitioning with the one that has less space (you can see the size of
each disk in the OS-installation partitioning window).
 The partitions on the disk with more space must have at least 3 Megabytes more
than the respective ones in the disk with less space, except for the replica
partitions (partitions that contain the word “replica”).

 If the disk with more space has just one replica partition, then the replica partition
must contain the remaining disk space (instead of 1GB).
 If the disk with more space has two replica partitions, then the remaining replica
partition with 1GB must contain the remaining disk space (instead of 1GB).
• The OS-installer partitioning window may show a lack of precision when translating
between Megabytes and Cylinders. For each pair of disks making up a mirror, if you notice this
inconsistency, make sure that the mirrored partitions have the same space in Megabytes.
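
After the OS installation, one way to cross-check that a pair of mirrored partitions really has the same size is to compare the sector counts reported by prtvtoc. This is only a sketch; the device names below are illustrative and must be replaced by the actual disks of the mirror pair:

# prtvtoc /dev/rdsk/c1t0d0s2
# prtvtoc /dev/rdsk/c1t1d0s2

Compare the sector count of each slice on the first disk with the corresponding slice on the second disk.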

Disk 0 (73GB)
Slice Mount point Size (GB) UFS mount option

S0 /replica1 1
S1 / 10
S2
S3 Swap 8
s4 /var/opt 22
s5 /opt 10
s6 /spots_rman 2
s7 /export/home R_disk
Internal Disk 1 (73GB)
Slice Mount point Size (GB) UFS mount option

s0 /replica2 1
s1 /root_mirror 10
s2
s3 /swap_mirror 8
s4 /var_opt_mirror 22
s5 /opt_mirror 10
s6 /spots_rman_mirror 2
s7 /home_mirror R_disk
Internal Disk 2 (73GB)
Slice Mount point Size (GB) UFS mount option

S0 /replica3 1
S1 /spots_db4 22 forcedirectio
S2
S3 /spots_db3 R_disk forcedirectio
s4
s5
s6
s7
Internal Disk 3 (73GB)
Slice Mount point Size (GB) UFS mount option

s0 /replica4 1
s1 /spots_db4_mirror 22 forcedirectio
s2
s3 /spots_db3_mirror R_disk forcedirectio


s4
s5
s6
s7
Table 35 - Disk Partitioning, Legacy Medium Configuration B1

StorEdge Disks NoRAID


Disks Mount point Size (GB) UFS mount option
4 /spots_db1 F.D. forcedirectio
10 /spots_db2 F.D. forcedirectio
StorEdge Disks RAID1+0
Disks Mount point Size (GB) UFS mount option
5,6,7,8,9        /spots_db5   F.D. (link)   forcedirectio
11,12,13,14,15   /spots_db6   F.D. (link)   forcedirectio
Table 36 - Disk Partitioning, Legacy Medium Configuration B1 – External Disks

To define the various mount options (see Table 35 and Table 36, column ‘UFS mount option’), edit
the file ‘/etc/vfstab‘ and insert the parameter in the last column of each line. After the next reboot or
re-mount of each file system, the new mount option will take effect. An example is presented below.

Example of ‘/etc/vfstab‘ without mount options:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes -


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes -

Example of ‘/etc/vfstab‘ with forcedirectio,nologging mount option:

/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /spots_db2 ufs 2 yes forcedirectio,nologging


/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /spots_db3 ufs 2 yes forcedirectio,nologging

2.4.2.2 Legacy Distributed Configurations

2.4.2.2.1 Legacy Large B1 Configuration – DB Server


This is the hard disk configuration of the Sun Fire V440 from V12 or V13 for the DB Server.
All the important notes from the Legacy Medium B1 Configuration are applied to this configuration
as well.

Internal Disk 0 (73GB) RAID1


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 20
S2
S3 swap 8
s4 /spots_rman 2
s5 /export/home R_disk
s6
s7
Internal Disk 1 (73GB) RAID1
Slice Mount point Size (GB) UFS mount option
s0 /replica2 1
s1 /root_mirror 20
s2
s3 /swap_mirror 8
s4 /spots_rman_mirror 2
s5 /home_mirror R_disk
s6
s7
Internal Disk 2 (73GB) RAID1
Slice Mount point Size (GB) UFS mount option
S0 /replica3 1
S1 /var/opt 52
S2
S3 /opt R_disk
s4
s5
s6
s7
Internal Disk 3 (73GB) RAID1
Slice Mount point Size (GB) UFS mount option
s0 /replica4 1
s1 /var_opt_mirror 52
s2
s3 /opt_mirror R_disk
s4
s5


s6
s7
Table 37 - Disk Partitioning, Legacy Large Configuration B1, Internal Disks – DB Server

StorEdge Disks RAID1+0


Disks Mount point Size (GB) UFS mount option

              /spots_db1   F.D.   forcedirectio
6,8,10,12     /spots_db2   F.D.   forcedirectio
              /spots_db3   F.D.   forcedirectio
7,9,11,13     /spots_db4   F.D.   forcedirectio
4,5,20 … 27   /spots_db5   F.D.   forcedirectio
14 .. 19      /spots_db6   F.D.   forcedirectio
Table 38 - Disk Partitioning, Legacy Large Configuration B1, External Disks – DB Server

2.4.2.2.2 Large Configuration – Application Server


This is the hard disk configuration of the Sun Fire V440 from V12 or V13 for the Application
Server.

All the important notes from the Legacy Medium B1 Configuration are applied to this configuration
as well.

Internal Disk 0 (73GB) RAID1


Slice Mount point Size (GB) UFS mount option
S0 /replica1 1
S1 / 20
S2
S3 swap 8
s4 /export/home R_disk
s5
s6
s7
Internal Disk 1 (73GB) RAID1
Slice Mount point Size (GB) UFS mount option
s0 /replica2 1
s1 /root_mirror 20
s2
s3 /swap_mirror 8
s4 /home_mirror R_disk
s5
s6
s7
Internal Disk 2 (73GB) RAID1
Slice Mount point Size (GB) UFS mount option
S0 /replica3 1
S1 /var/opt 52

S2
S3 /opt R_disk
s4
s5
s6
s7
Internal Disk 3 (73GB) RAID1
Slice Mount point Size (GB) UFS mount option
s0 /replica4 1
s1 /var_opt_mirror 52
s2
s3 /opt_mirror R_disk
s4
s5
s6
s7
Table 39 - Disk Partitioning, Legacy Large Configuration B1, Internal Disks – AS Server


3 Installation Procedure Overview

This chapter describes how to carry out the SPOTS installation.

3.1 Preparing for the SPOTS Installation


To prepare for the SPOTS Installation, it is necessary to carry out the following tasks.

3.1.1 Obtaining SPOTS Licenses


Prior to starting the SPOTS Installation procedure, make sure you have:
• a valid SPOTS License Key, covering all desired features
• a set of valid TP License Keys covering all desired TPs
For details on ordering licenses see [2], Section 1.4 (Licensing).

3.1.2 Need for Upgrade or Data Migration


Determine whether an Upgrade or Data Migration procedure is required.
The upgrade procedure allows the upgrade of an existing SPOTS installation (belonging to a
previous SPOTS V12 or V13 version) into a SPOTS V14 installation.
The upgrade procedure consists of two types of upgrade, depending on the conditions available
from the previous SPOTS V13 or V12 systems, as described in the next sections:

3.1.2.1 Upgrade overview

The following tables present the available upgrade types:

3.1.2.1.1 Upgrade from V12 to V14

                                                        HW Scenario
Upgrade      Type                     Output   Released           Same   Expansion   New
V12 to V14   Uninstall DB or New DB   Disk     Not Yet Released   YES    YES         YES
V12 to V14   Uninstall DB or New DB   Tape     YES                YES    YES         YES
V12 to V14   Keep DB                  Disk     YES                YES    N/A         N/A

3.1.2.1.2 Upgrade from V13 to V14

                                                        HW Scenario
Upgrade      Type                     Output   Released   Same   Expansion   New
V13 to V14   Uninstall DB or New DB   Disk     YES        YES    YES         YES
V13 to V14   Uninstall DB or New DB   Tape     YES        YES    YES         YES
V13 to V14   Keep DB                  Disk     YES        YES    N/A         N/A

3.1.2.2 Upgrade using existing HW

In this type of upgrade the existing SPOTS V12 or V13 system will be completely upgraded to a
SPOTS V14 system, reusing the existing HW.
The upgrade procedure will render the existing SPOTS V12 or V13 system unusable for the
duration of the upgrade process, and only when the upgrade is finished will the system be ready to
be operated as a SPOTS V14 system.

3.1.2.3 Upgrade using new HW

In this type of upgrade the existing SPOTS V12 or V13 system will be kept in operation while a
SPOTS V14 system is installed on the new HW.
An external repository for exporting the data of the SPOTS V12 or V13 system is needed in order to
serve as a source for importing the data into the SPOTS V14 system on the new HW.
Only after the upgrade process is finished on the SPOTS V14 system will the existing SPOTS V12 or
V13 system be taken out of operation.

The Data Migration procedure only applies for the case of a SPOTS V14 HW upgrade, as
described in the next section:

3.1.2.4 Data Migration

The Data Migration procedure allows transferring data from an existing SPOTS V14 installation
into a new SPOTS V14 installation for the case of a SPOTS V14 HW upgrade.
This is applicable in the case of expansion of the HW configurations:
• Small ( rack ) to Medium


• Medium to Large

3.1.3 Consulting Product Release Notes


Before proceeding, refer to [2] for possible recommendations on SPOTS Installation.

3.1.4 Collecting Information


Before installing SPOTS, make sure to identify:
• The desired type of installation (Single Server / Distributed) for the SPOTS Server(s) (see
Section 2.1);
• The desired Disk Partitions Scheme for the SPOTS Server(s) (see Section 2.4);
• For components which may be installed alternatively on Solaris or Windows, which
platform is to be used (see Section 2.1);
Note: SPOTS-PMC installations can be added later on, as needed.
Additionally, collect parameterization information to be used during the installation — see Annex 5.

3.2 Installation Tasks


In order to perform the installation of SPOTS, for each type of installation presented below, the
corresponding steps have to be carried out, in the sequence described in the flowcharts of the next
sections:

 Initial Installation of a SPOTS V14 System

 Upgrade on Existing Hardware from SPOTS V12 or V13 System

 Migrating to New Hardware from a SPOTS V12 or V13 System

 SPOTS V14 Software Upgrade

 SPOTS V14 Hardware Upgrade

 Upgrade from SPOTS V12 System


Before upgrading, ensure that only the V14 core is installed, i.e. no patches are installed. After
the upgrade is completed, the respective V14 patch set should be installed.
 Upgrade from SPOTS V13 System
Before upgrading, ensure that the machine has the latest V13 patch installed (fully updated).

If new hardware is received from the NSN logistics center, the SPOTS software should already be
pre-installed. In that situation please start from the “Installing SPOTS Patches” step, as described in
the next flow chart.

3.2.1 Initial Installation of a SPOTS V14 System

Initial Installation of a SPOTS V14 System:

Step 1: Installing Specific Hardware
Step 2: Installing Standard Software
Step 3: Installing SPOTS Software on Solaris
Step 4: Installing SPOTS RTA on Solaris
Step 5: Installing SPOTS PMC on Windows
Step 6: Installing SPOTS Patches
Step 7: Installing SPOTS TPs [from “/” directory]
Step 8: Installing Virtual X Server
Step 9: Reboot the System
End

Figure 3, Initial Installation of a SPOTS V14 System


1. Installing Specific Hardware


The procedure to install the required hardware components is out of the scope of this
manual. Refer to the hardware documentation.

2. Installing Standard Software


This procedure must be repeated on every SPOTS PMS host to be installed.
Perform the steps in the following Chapters depending on the applicable scenario:

A. Initial Installation of a SPOTS V14 system


 Chapter 5 (Installing SUN Solaris 10)
 Chapter 6 (Fault Tolerance with Disk Mirroring, if applicable)
 Chapter 7 (SPOTS Configurations with External Storage, if applicable)
 Chapter 8 (Installing Oracle Software).
 In a distributed environment the installation of the Oracle Software must
only be performed on the SDS system.
B. Hardware Dependent
If the desired initial SPOTS V14 configuration does not fit
the cases described in 2.3.1 - SPOTS PMS, PMC and RTA (Solaris
environment), please contact your local Nokia Siemens Networks representative.

3. Installing SPOTS Software on Solaris Environment


 When migrating more than 15 days of PM detailed data into a new SPOTS database
with partitioning option, it is required to configure the newly installed SDS before its
first execution. During SPOTS installation, be alert to the instruction to configure the
“sds.cfg” file. Any non-conformity may result in loss of PM detailed data.

 If the SPOTS V14 system configuration is distributed, i.e., with an Application Server
(AS) and a Database Server (DS), then proceed to section 3.2.6 - Installation of
Oracle Instant Client in Application Server (AS) machine after having completed
the steps in Chapter 9. Return to this section at the end.
 If the SPOTS V14 system configuration is Distributed, the group dba must be
created on the Application Server and the user root must be added to it. Log in as
root and execute:
# groupadd dba
To add root to that group, edit the file /etc/group and append “root” at the end of the dba
line:
Example:

dba::10011:root

For the changes to take effect, log out and log in again as root.
 If the SPOTS V14 system configuration is Distributed, then during the spotsAS installation the
answer to “Please enter below the IP-Address of the Database Server” must be the exact
hostname of the Database Server, and that hostname must be added to the /etc/hosts file on the
Application Server (a sketch of such an entry is shown below).
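
As a sketch only (the IP address and hostname below are illustrative and must be replaced by the real values of the Database Server), the /etc/hosts entry on the Application Server would look like:

141.29.135.18   dbserver01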

This procedure must be repeated on every SPOTS PMS host to be installed. Perform the
steps in Chapter 9.
4. Installing the SPOTS Real-Time Agency Software on Solaris Environment
(Applied if Real-Time functionality is desired and the Agency was not installed on the Server)
This procedure must be repeated on every SPOTS RTA to be installed on Solaris.
Perform the steps in Chapter 10.
5. Installing SPOTS PMC Software on Windows Environment
This procedure must be repeated on every SPOTS PMC to be installed on Windows.
Perform the steps in Chapter 10.
6. Installing SPOTS patches
This procedure must be repeated on every SPOTS PMS host to be installed.

 Check which SPOTS patches are released and obtain them. These patches are found
on the SPOTS V14.0 Appl Patches DVD and important information about the patches
is found on the Patches Release Notes.
To know if you have the latest patches released you should contact SPOTS TPS
(Technical Support) or go to IMS homepage (ims.icn.siemens.de) and follow the links:
Enterprise -> Mobile Networks (Com MN) -> Products & Solutions -> Technical
Support -> O&M -> SPOTS -> SPOTS Release documentation -> Public – SPOTS
V14

 Install SPOTS patches following each patch specific instructions.
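
To check which SPOTS patch packages are already installed on a host, pkginfo can be used; this is a sketch that assumes the patch package naming (PM-SPOTS-V1401-patch-*) shown later in this chapter:

# pkginfo | grep -i "PM-SPOTS-V1401-patch"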

7. Installing SPOTS TPs


This procedure must be repeated on every SPOTS PMS host to be installed.

 Proceed with the installation of the latest SPOTS V14 TPs that are located in the
SPOTS V14 TPs distribution DVD (Technology Plug-Ins for Solaris) under the root “/”
directory.

 See Chapter 11 - Technology Plug-Ins (TPs).

8. Installing Virtual X Server


You must install the Virtual X Server:

 Login as root user and insert the SPOTS Performance Management V14.0 Core DVD

 Execute the following command to install the Virtual X Server:


# /cdrom/cdrom0/Xvfb/install

 Remove the SPOTS Performance Management V14.0 Core DVD with the commands:
# cd /
# eject cdrom

 Edit the $SPOTS_DIR/sas.cfg file and add the following line:


VirtualClientDisplay=:9

9. Reboot the system


 Execute, as root user, the following command:


# /etc/shutdown -y -g0 -i6

 The database partitioning is used to automatically remove the PM detailed data. If
activated, 15 days of detailed data (with granularity bigger than 5 minutes) are stored
by default.
 If the period to be defined for the SPOTS V14 system is not the default, this
setting must be changed.
Change the property NumberDaysInDetailPartition_86400 (for data with granularity
bigger than 5 minutes) in the “sds.cfg” file (refer to Annex 3); a sketch of the property
line is shown after these notes.

 When the changes to the “sds.cfg” file are finished, stop and start SPOTS as described in
Sections 4.1 and 4.2.
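
As a sketch only (the exact syntax and allowed values must be checked against Annex 3), the property line in “sds.cfg” could look like:

NumberDaysInDetailPartition_86400=15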

3.2.2 Upgrade on Existing Hardware from SPOTS V12 or V13 System

Upgrade on Existing Hardware from SPOTS V12 or V13 System:

Step 1: Get System Information & Requirements
Step 2: Solaris Pre-Upgrade
Step 3: Solaris Upgrade
Step 4: Upgrade to Oracle 10.2.0.3 Enterprise Edition
Step 5: Upgrade SPOTS Software on Solaris
Step 6: Install SPOTS Patches
Step 7: Upgrade Database to V14 Level
Step 8: Restore User Parameters
Step 9: Upgrade SPOTS TPs
Step 10: Install New SPOTS TPs
Step 11: Reboot the System
End

Figure 4, Upgrade on Existing Hardware from SPOTS V12 or V13 System


 If WebReports is installed in the system, please consult the
WebReports/WebPortal documentation for specific upgrade instructions
before starting the SPOTS upgrade.

 If SPOTS-BAR is installed in the system, please uninstall SPOTS-BAR using
the V13/V14 BAR uninstall guide. Also uninstall Legato Networker according to the
same manual.

 If an upgrade is being made from SPOTS V13 mobile, at least the SPOTS V13
mobile TP’s Version 69 must be installed.

Using the above flowchart as a guideline, follow the steps depicted below to upgrade a SPOTS
V12/V13 System to a SPOTS V14 System, reusing the existing hardware from SPOTS V12 / V13.
In the following steps there is an indication of the hosts on which they are to be executed:

• Application Server ( Only AS )


• Database Server ( Only DS )
• Application Server and Database Server ( Both AS & DS )

If a step contains an exception to this indication, the exception will be stated
explicitly.

If the system is a Single Server Environment then the Application Server host is the same as the
Database Server host.

1. Get System Information & Requirements ( Both AS & DS )

 Before upgrading the system to Solaris 10 10/08 gather some information about
your system

 Login as root user.

 Current IP, execute the command:


# cat /etc/hosts

 Current Netmask , execute the command:


# cat /etc/netmasks

 Default Route, execute the command:


# cat /etc/defaultrouter

 Active Network Interface, execute the command:


# ifconfig -a

 Hostname, execute the command:

# cat /etc/nodename

 The minimum system requirements for a Solaris upgrade are presented in the
following table:

System Requirements
Type Value
Memory 512 MB
Swap 512 MB
CPU 200 MHz
/ (root FS) 30% Free Space
/opt 4GB Free Space
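
These values can be checked quickly with standard Solaris commands; a minimal sketch (run as root):

# prtconf | grep "Memory size"
# swap -l
# psrinfo -v
# df -k / /opt

prtconf reports the installed memory, swap -l the configured swap devices, psrinfo -v the CPU clock speed, and df -k the free space on / and /opt.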

2. Solaris Pre-Upgrade ( Both AS & DS )

 Before starting to upgrade your system it is highly recommended to backup your


existing file systems. A full backup will prevent against data loss, damage or
corruption.
 The first thing to deinstall is the TPs that are installed in the SPOTS system, but
only for the RTS and SAS subsystems. You should open the SPOTS TPs
Framework and open the “Set configuration” window (see the figure below). In the
“TP Options” pane, on the “On Remove” column, unselect the checkbox for the
SDS subsystem (only applicable to the DS host) and leave the other subsystems
checked. See Chapter 11 - Technology Plug-Ins (TPs) or the TPs Help, for more
details, if needed.

 Next, start to deinstall all the TPs that are installed in the SPOTS system. See
Chapter 11, Technology Plug-Ins (TPs) or the TPs Help, for more details, if
needed.
 Next, deinstall the SPOTS Patches that were applied to the SPOTS system.
Please refer to the patches specific instructions in order to deinstall them.


 Before upgrading to Solaris 10 10/08 it is necessary to remove the existing RAID1
configuration from the root (/) file system, as described in the next steps. After the
upgrade is complete, the RAID1 configuration will be rebuilt.

 Upgrade Steps

 Find out the metadevice for root FS. In this example the metadevice is d100:
# df -k /
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d100 10082492 1544304 8437364 16% /

 Get information about the root FS submirrors and related disk devices with metastat -p
<metadevice>. Continuing to use the previous example:
# metastat -p d100
d100 -m d101 d102 1
d101 1 1 c1t0d0s1
d102 1 1 c1t1d0s1

 Remove the second submirror that is not being upgraded with the command metadetach
<metadevice> <submirror>. Continuing to use the example:
# metadetach d100 d102

 Now revert to using the appropriate physical device to be upgraded with metaroot
<phys.device>. Again using the example:
# metaroot /dev/dsk/c1t0d0s1

 After making a backup of /etc/vfstab it is required to comment out all lines containing spots_db* in
/etc/vfstab except those that are metadevices (devices that have the md string in the path to the
device), in this example /dev/md/dsk/d50 and /dev/md/dsk/d60. (Note: In a
Large/distributed installation, this applies only to the DS.)
# cp /etc/vfstab /etc/vfstab.orig
# vi /etc/vfstab
/dev/md/dsk/d50 /dev/md/rdsk/d50 /spots_db3 ufs 2 yes
-
/dev/md/dsk/d60 /dev/md/rdsk/d60 /spots_db4 ufs 2 yes
-
#/dev/dsk/c4t0d0s6 /dev/rdsk/c4t0d0s6 /spots_db1 ufs 2 yes
-
#/dev/dsk/c4t0d1s6 /dev/rdsk/c4t0d1s6 /spots_db2 ufs 2 yes
-
#/dev/dsk/c4t0d2s6 /dev/rdsk/c4t0d2s6 /spots_db5 ufs 2 yes
-

#/dev/dsk/c4t0d3s6 /dev/rdsk/c4t0d3s6 /spots_db6 ufs 2 yes
-
~
~
:wq!

 Remove the swap mirror metadevice & submirrors with the following sequence of
commands:
# swap -l

The first column of the output will show a device, for example:
/dev/md/dsk/d90

 Now check which devices are part of that swap metadevice, by executing the following
command (using output from previous example):
# metastat -p d90
d90 -m d91 d92 1
d91 1 1 c1t0d0s3
d92 1 1 c1t1d0s3

 Remove the first concat stripe submirror from the mirror. (Continuing to use previous
example output):
# metadetach d90 d92

 Edit the /etc/vfstab file and uncomment the following line. Again using the example:
/dev/dsk/c1t0d0s3 - - swap - no -

 Edit the /etc/vfstab file and comment the following line. Again using the example:
#/dev/md/dsk/d90 - - swap - no -

 Now you need to reboot the machine:


# reboot

 Login as root user and insert the SPOTS Performance Management V14.0 Core DVD

 Execute the following commands:


# cd /cdrom/cdrom0/diskman/OSandBRmirror
# cp unmount.md.ksh /var/tmp/
# chmod +x /var/tmp/unmount.md.ksh
# cd /var/tmp/


# sh unmount.md.ksh

 After running the unmount.md.ksh script, edit /etc/vfstab and comment out the lines of the spots_db
metadevices (devices that have md in the path to the device); the raw devices were already
commented out previously. A sketch of the result is shown below.
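
A sketch of how those metadevice lines could look once commented out, reusing the example devices from above:

#/dev/md/dsk/d50 /dev/md/rdsk/d50 /spots_db3 ufs 2 yes -
#/dev/md/dsk/d60 /dev/md/rdsk/d60 /spots_db4 ufs 2 yes -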

3. Solaris Upgrade ( Both AS & DS )

 Make sure you are connected to the system console. Shut down the system and insert the
Solaris 10 10/08 Software DVD.
# shutdown -i0 -g0 -y

This step describes how to upgrade your Solaris 10 OS to Solaris 10 10/08 OS, which is required
by all SPOTS V14 components. The description is DVD oriented. This upgrade procedure will take
about 60 minutes.

 At ok prompt enter the following command:


ok boot cdrom

 Select English as the Solaris Installer language:


“0” for English.

 Select the terminal type “DEC VT100”:


“3” for DEC VT100

 Press “F2” to continue

 Press “F2” to continue

 Set Network Connectivity to:


“Yes”

 Select Network Interface (if applicable), for example:


“bge0”

 If asked, select “Use Dynamic Host Configuration Protocol (DHCP)”:


“No”

 Choose the Host Name that will identify this system on the network, for example:
“pms01”

 Enter the Internet Protocol (IP) Address for this system, for example:
“141.29.135.17”

 System part of a subnet:


“Yes”

 Specify the Netmask of your subnet, or accept the default value, for example:
“255.255.255.128”
 Do not accept the default Netmask unless you are sure it is correct for your subnet.


 Enable IPv6:
“No”

 Set the Default Route. If you know the IP address to the default gateway select “Specify one”,
otherwise select “Detect one upon reboot”
o Input the Default Router IP Address, if you have chosen to specify

 Confirm the information by pressing “F2” to continue or “F4” to change the information.

 Configure Kerberos Security:


“No”

 Confirm the information by pressing “F2” to continue or “F4” to review the information.

 Select the Name Service that will be used by this system:


“None”

 Confirm the information by pressing “F2” to continue or “F4” to review the information.

 Choose “Use the NFSv4 domain derived by the system”

 Select the Time Zone


o Select the Continent, then the Country, select specific Region if prompted and confirm
the Date values

 Confirm the information by pressing “F2” to continue or “F4” to review the information.

 Select “Yes” for the Remote Services.

 Type the alphanumeric string to be used as root password and confirm it (press Enter after
typing the password in each field).

 System identification is complete.

 Select Type of Install:


“Standard”

 Select to automatically eject the DVD:

“Automatically eject CD/DVD”

 Reboot After Installation:

“Auto Reboot”

 Select Initial Installation (only applicable, if the Solaris installation program detects that the
system is upgradeable):
“Upgrade”

 If multiple internal disks with Solaris are found, you have to choose the version to upgrade.
The default is the most recent boot disk, for example:
“Solaris 10” on Slice c1t2d0s0

 Patch Analysis:
“Press F2 to continue”

 Read and Accept License to continue installation:

“Accept License”

 Select Geographic Regions, for which support should be installed. Example:

“Southern Europe” > “Portugal”

 Select System Locale:

Select the Locale most appropriate to your location. Example:

“Portugal (ISO8859-15 - Euro)”

 Select Products :

Don’t select any product and press “F2” to continue

 Additional Products:
“None”

 Customize Software?
“Press F2 to continue”

 Profile. Check the displayed information.


“Upgrade”.

 Configure Keyboard Layout:


Choose “Portuguese”.

 The Solaris Upgrade begins. It may take around 60 minutes to finish.

 The Solaris 10 Software DVD is ejected.

 System reboot is automatically initiated.

 After logging in to the system with root, it is recommended to enable the solaris volume
daemon by running the following command:
# svcadm enable smserver

 This command has to be executed only once.


 MEDIUM & LARGE INSTALLATIONS: perform the following steps in the case of a
Medium or Large Installation.

 Begin Medium & Large Upgrade Steps

 It is now required to recreate the RAID1 configuration of the root (/) disk. Set up the
root mirror with metaroot <metadevice>. Use the metadevice which was detected
before starting the upgrade. In our example d100:
# metaroot d100

 Edit /etc/vfstab and uncomment the lines containing spots_db* (a before/after sketch is
shown below). Then reboot the system.
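
As a sketch, reusing the example raw devices shown earlier, a commented line such as

#/dev/dsk/c4t0d0s6 /dev/rdsk/c4t0d0s6 /spots_db1 ufs 2 yes -

becomes

/dev/dsk/c4t0d0s6 /dev/rdsk/c4t0d0s6 /spots_db1 ufs 2 yes -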

 Now attach the second submirror into the root metadevice (in this example d102).
# metattach d100 d102

 Now attach the second submirror into the swap metadevice, using the previous swap
example:
# metattach d90 d92

 Uncomment the metadevice for swap in /etc/vfstab and comment out the physical hard
disk for swap, using the previous example:
# /dev/dsk/c1t0d0s3 - - swap - no -
/dev/md/dsk/d90 - - swap - no -

 Execute the script which will recreate metadevices for the remaining partitions: /opt,
/var/opt, export/home, /spots_rman and in the case of a database server, will also
create /spots_db1, /spots_db2, /spots_db3 and /spots_db4.
# /var/tmp/sol.upgd.dir/recreate.mirrors.ksh

 Now edit /etc/vfstab and:


1. Uncomment the lines of the metadevices (for example: /dev/md/dsk/md50).
2. Comment the lines off the raw devices (for example: /dev/dsk/c1t0d0s7).
3. On Database Servers uncomment the lines belonging to /spots_db5 and
/spots_db6 (these will be always raw devices like /dev/dsk/c3t0d0s6).
The final result should be similar to this:
/dev/md/dsk/d50 /dev/md/rdsk/d50 /spots_db3 ufs 2 yes -
/dev/md/dsk/d60 /dev/md/rdsk/d60 /spots_db4 ufs 2 yes -
/dev/md/dsk/d120 /dev/md/rdsk/d120 /var/opt ufs 2 yes -
/dev/md/dsk/d70 /dev/md/rdsk/d70 /export/home ufs 2 yes -
/dev/md/dsk/d80 /dev/md/rdsk/d80 /opt ufs 2 yes -

/dev/md/dsk/d130 /dev/md/rdsk/d130 /spots_rman ufs 2 yes -
#/dev/dsk/c1t0d0s7 /dev/rdsk/c1t0d0s7 /export/home ufs 2 yes -
#/dev/dsk/c1t1d0s7 /dev/rdsk/c1t1d0s7 /home_mirror ufs 2 yes -
#/dev/dsk/c1t0d0s5 /dev/rdsk/c1t0d0s5 /opt ufs 2 yes -
#/dev/dsk/c1t0d0s4 /dev/rdsk/c1t0d0s4 /var/opt ufs 2 yes -
/dev/dsk/c1t0d0s6 /dev/rdsk/c1t0d0s6 /spots_rman ufs 2 yes -

 Now issue the reboot command:


# reboot

 End Medium & Large Upgrade Steps


4. Upgrading Oracle 10 Enterprise Edition ( Only DS )

 Login as root user and insert the SPOTS Performance Management V14.0 Core DVD

 Execute the following command:


# /cdrom/cdrom0/upgrade/v14/sameHW/cpfiles.sh

 Remove the SPOTS Performance Management V14.0 Core DVD.

 Stop all SPOTS services (see 4.1-Stopping SPOTS).

 Execute the following command


# . /etc/spotsenv

 Run the preUpgrade script with option <backup> to shutdown database and backup
database related files:
# $SPOTS_DIR/upgrade_db/preUpgrade.sh backup

 Now remove Oracle


In the case of upgrading from a Spots V12 installation, do:
# pkgrm ORAaddon
(…)
# pkgrm ORAserver
(…)

In the case of upgrading from a Spots V13 installation, do:


# pkgrm ORA10EE

 After uninstalling the software remove the directory /opt/oracle:


# rm -r /opt/oracle

 Now install Oracle 10 Enterprise Edition as it is described in Chapter 8 - Installing Oracle


Software

 Warning: at the end of the Oracle install.sh script, the following error messages appear
in the output:
ln: cannot create /etc/rc2.d/S99dbora
ln: cannot create /etc/rc0.d/K10dbora

Please ignore them.

 Stop all SPOTS services (see 4.1-Stopping SPOTS).

 Execute the following command


# . /etc/spotsenv

 After a successful installation run the preUpgrade script again, now with option <restore>:
# $SPOTS_DIR/upgrade_db/preUpgrade.sh restore

 Now upgrade the SPOTS database, login as user oracle and run the following script:
# su - oracle
$ ORACLE_SID=spot; export ORACLE_SID
$ sqlplus “/AS SYSDBA”
(…)
SQL> startup upgrade
(…)
SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql
(…)

 This step will last about 30 minutes. At the end confirm in the screen output the following
message:

Oracle Database 10.2 Upgrade Status Utility 09-046-2006 15:05:59

Component Status Version HH:MM:SS

Oracle Database Server VALID 10.2.0.3.0 00:30:39

Total Upgrade Time: 00:30:39

PL/SQL procedure successfully completed.

 Now restart the database and run the following script:


SQL> shutdown immediate
(…)
SQL> startup
(…)
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
(…)
SQL> shutdown immediate
(…)
SQL> quit

 If the SPOTS system configuration is Distributed, i.e., with an Application Server (AS) and a
Database Server (DS), then the following sub-steps must be carried out ONLY on the
Application Server (AS) and ONLY if the Oracle Client is installed on the machine:

 Remove Oracle Client Software:


# pkgrm ORAaddon

# pkgrm ORAclient


 Verify that no entries remain in the system for Oracle, namely directories in
/opt/oracle or /var/oracle. If they exist remove them.

 Remove the oracle user, if it exists, by executing the command as root:


# userdel -r oracle

 Now install the Oracle Instant Client as described in section 3.2.6 - Installation of
Oracle Instant Client in Application Server (AS) machine. Return to this
section at the end.

5. Upgrade SPOTS V14 ( Both AS & DS )

 Execute all the steps described in Section 3.2.8 in order to back up all relevant
user parameters.

The next procedure must be repeated on every SPOTS PMS host to be installed.

 Insert the SPOTS Performance Management V14.0 Core DVD, run the
spots_installer and choose “Upgrade to V14 Same Hardware”. See also Chapter 9.

 The spots_installer will remove the old SPOTS software and afterwards install SPOTS V14.
On the Database Server the packages spotsRTDB and spotsDB will also be uninstalled
without removing the database.

 Answer “yes” to all the queries when spots packages are being removed.

 When migrating more than 15 days of PM detailed data into a new SPOTS
database with partitioning option, it is required to configure the newly installed SDS
before its first execution. During SPOTS installation, be alert to the instruction to
configure the “sds.cfg” file. Any non-conformity may result in loss of PM detailed
data.

 After a successful installation a reboot is required. Execute, as root user, the following
command:
# /etc/shutdown -y -g0 -i6

6. Install SPOTS Patches ( Both AS & DS )

This procedure must be repeated on every SPOTS PMS host to be installed.

 Check which SPOTS patches are released and obtain them. These patches are found on the
SPOTS V14.0 Appl Patches DVD and important information about the patches is found on
the Patches Release Notes.

 To know if you have the latest patches released you should contact SPOTS TPS (Technical
Support) or go to IMS homepage (ims.icn.siemens.de) and follow the links:

Enterprise -> Mobile Networks (Com MN) -> Products & Solutions -> Technical Support ->
O&M -> SPOTS -> SPOTS Release documentation -> Public – SPOTS V14

 Install SPOTS patches following each patch specific instructions.

7. Upgrade Database to V14 Level ( Only DS )

 Login as root user.

 Execute the following script to upgrade the database. Depending on the amount of loaded
data in the database this step could last several hours.
# . /etc/spotsenv
# /etc/init.d/initSpotsPMS stop
# $SPOTS_DIR/upgrade_db/upgrade_SameHW.sh
(…)

 Logs resulting from the previous scripts can be found in:


$SPOTS_DIR/upgrade_db/logs
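
While upgrade_SameHW.sh is running, its progress can be followed from a second session by tailing the most recent log file; a sketch (the actual log file names vary):

# cd $SPOTS_DIR/upgrade_db/logs
# ls -lt | head
# tail -f <most_recent_log_file>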

 If the following error message appears:


Error on starting listener:
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
while executing the previous script, please perform the following steps to work around this Oracle error
when starting the listener:

 Edit the /etc/hosts file to include the following line:
127.0.0.1 localhost.localdomain localhost

 Rename the file ons.config.


# cd $ORACLE_HOME/opmn/conf
# mv ons.config ons.config.orig

 Remove the /var/tmp/.oracle directory, e.g. run the following from the root.
# rm -rf /var/tmp/.oracle

 Execute again the script:


# $SPOTS_DIR/upgrade_db/upgrade_SameHW.sh

8. Restore User Parameters ( Both AS & DS )

 Do not restore file /etc/spotsenv.


Restore the user parameters that were saved in Step 5. This procedure is applied on the installed
SPOTS Server that contains the SPOTS Database (DB Server) and also on the SPOTS Server that
contains the Application Server (AS Server).

 Perform the steps described in Section 3.2.9, with the exception of the steps described in
Section 3.2.9.2.2 - Merge of virtual entities.

 Start all SPOTS services (see 4.2-Starting SPOTS ).

9. Upgrade SPOTS TPs ( Only DS )

At this stage of the upgrade process, the TPs that existed in the SPOTS V12 system were
converted to intermediate SPOTS V14 TPs in order to be prepared to be upgraded with the SPOTS
V14 TPs.

Some of the existing V12 TPs were actually “merged” and transformed into V14 TPs, but all the
relevant traffic data was preserved in the database.

 Check that the package p140119-2 was installed, login as root and execute the following
command:

# pkginfo | grep -i p140119-2


application p140119-2 PM-SPOTS-V1401-patch-19-rel-2

If the above command returns the same kind of output, the patch is installed.
If the patch is not installed, no output will appear (output for the case where the patch is not
installed is shown below):

# pkginfo | grep -i p140119-2


#

If the patch was not installed go to Figure 4, Upgrade on Existing Hardware from SPOTS V12 or
V13 System and make sure step 6 was correctly done. After that return to step 9, Upgrade SPOTS
TPs.

 Before upgrading any TPs, on the DS server only, and if and only if the upgrade was done from a
SPOTS V12 System, execute, as the spots user, the script located in the upgrade_db directory
under the $SPOTS_DIR directory:

$ $SPOTS_DIR/upgrade_db/SG007649.ksh

 Do not install new SPOTS V14 TPs, only perform an upgrade for the TPs that exist
on the system.

 Proceed with the installation of the latest SPOTS V14 TPs that are located in the SPOTS V14
TPs distribution DVD (Technology Plug-Ins for Solaris) under the root “/” directory, by
performing an upgrade ONLY to the TPs that are already installed in the system.

 See Chapter 11 - Technology Plug-Ins (TPs).

10. Install New SPOTS TPs ( Both AS & DS )

This procedure must be repeated on every SPOTS PMS host to be installed


 Now, install ONLY new SPOTS V14 TPs, which were not initially on the system.

 Proceed with the installation of the latest SPOTS V14 TPs that are located in the SPOTS V14
TPs distribution DVD (Technology Plug-Ins for Solaris) under the root “/” directory, by
performing an installation ONLY for the new TPs that were not initially on the system.

 See Chapter 11 - Technology Plug-Ins (TPs).

 If Real-Time is available on the SPOTS V14 system, the Real-Time Agents need to be
upgraded in order to start automatically. Please refer to the SPOTS V14 User Manual, in
section “4.5.5 PDC Types window”, and in this window right-click on the agent type, for each
of the agent types available on the system, and select “Upgrade Agents of this Type”. ( Only
AS )

11. Reboot the System ( Both AS & DS )

 Execute, as root user, the following command:


# /etc/shutdown -y -g0 -i6

 Upgrade on Existing Hardware from V12 or V13 Systems completed.


3.2.3 Migrating to New Hardware from SPOTS V12 or V13 System

Migrating to New Hardware from V12 or V13 System:

Step 1: Backup User Parameters
Step 2: Import to V14 system via Network-Link
Step 3: Restore User Parameters
Step 4: Upgrade SPOTS TPs
Step 5: Install New SPOTS TPs
Step 6: Install Virtual X Server
Step 7: Reboot the System
End

New V14 System (installed in parallel):

Step 1: Install Specific Hardware
Step 2, Sub-Step A: Install Standard Software
Step 3: Install SPOTS Software on Solaris
Step 4: Install SPOTS RTA on Solaris
Step 5: Install SPOTS PMC on Windows
Step 6: Installing SPOTS Patches

Figure 5, Upgrading from SPOTS V12/V13 System (Using New HW)

 If WebReports is installed in the system, please consult the
WebReports/WebPortal documentation for specific upgrade instructions
before starting the SPOTS upgrade.

 If an upgrade is being made from SPOTS V13 mobile, at least the SPOTS TP’s
Version 69 must be installed.

Using the above flowchart as a guideline, follow the steps depicted below to upgrade a SPOTS
V12 or V13 System to a SPOTS V14 System using new hardware.

The SPOTS V14 system will be installed in parallel, as presented in the embedded flowchart
depicted in Figure 5 with the title “New V14 System”.

The steps to be followed for the SPOTS V14 system installation are the same as for the case of a
SPOTS V14 Initial Installation, as described in section 3.2.1 -
Initial Installation of a SPOTS V14 System, but only until Step 6. At this moment (Step 6) the
New V14 system is ready to receive the data from the SPOTS V12 or V13 system (Step 2 in the
“main” flowchart).

In the following steps there is an indication of the hosts on which they are to be executed:

• Application Server ( Only AS )


• Database Server ( Only DS )
• Application Server and Database Server ( Both AS & DS )

If a step contains an exception to this indication, the exception will be stated
explicitly.

If the system is a Single Server Environment then the Application Server host is the same as the
Database Server host.

1. Backup User Parameters ( Both AS & DS )

Execute all the steps described in Section 3.2.8 in order to back up all relevant user parameters.

2. Import to V14 system via Network-link ( Only DS )

 The remaining steps are to be executed in the SPOTS V14 System

 Login as root user in the SPOTS V14 System.

 Reboot the system by executing, as root user, the following command:


# /etc/shutdown -y -g0 -i6


 The database partitioning is used to automatically remove the PM detailed data. If activated,
15 days of detailed data (with granularity bigger than 5 minutes) and 2 days of detailed data
(with 5 minutes granularity) are stored by default.
 If the period in the SPOTS V12 system was not the default, this setting must be
changed in the SPOTS V14 system prior to starting the import of the V12 data. Change the
property NumberDaysInDetailPartition_86400 (for data with granularity bigger than 5
minutes) in the “sds.cfg” file (refer to Annex 3).

 When the changes to “sds.cfg” file are finished stop and start SPOTS as described in
Sections 4.1 and 4.2.

 Install patch p140120-* (where * is the latest version in the patch DVD).

 Stop all SPOTS services on both servers (see 4.1-Stopping SPOTS).

 Execute the script upgradeDiffHWtoV14 to configure the network-link between both
databases and start the import process into the SPOTS V14 database. The script will ask
for the IP-Address of the old SPOTS system, the name of the SPOTS database, the
Oracle listener port, the database password (omcadm) of the old database and the
database password (omcadm) of the new V14 database.
# . /etc/spotsenv
# $SPOTS_DIR/upgrade_db/upgradeDiffHWtoV14

 Remove the SPOTS Patches DVD.

3. Restore User Parameters ( Both AS & DS )

Restore the user parameters that were saved in Step 1. This procedure is applied on the installed
SPOTS Server that contains the SPOTS Database (DB Server) and also on the SPOTS Server that
contains the Application Server (AS Server).

 Perform the steps described in Section 3.2.9, with the exception of the steps described in
section 3.2.9.2.2 - Merge of virtual entities.

 Start all SPOTS services (see 4.2-Starting SPOTS ).

4. Upgrade SPOTS TPs ( Both AS & DS )

 Check that the package p140120-4 was installed, login as root and execute the following
command:

# pkginfo | grep -i p140120-4


application p140120-4 PM-SPOTS-V1401-patch-20-rel-4

If the above command returns the same kind of output, the patch is installed.
If the patch is not installed, no output will appear (output for the case where the patch is not
installed is shown below):

# pkginfo | grep -i p140120-4


#

If the patch was not installed, go to Figure 5, Upgrading from SPOTS V12/V13 System (Using
New HW) and make sure step 6 was done correctly. After that return to step 4, Upgrade SPOTS
TPs.

 Before upgrading any TPs, on the DS server only, and if and only if the upgrade was done from a
SPOTS V12 System, execute, as the spots user, the script located in the upgrade_db directory
under the $SPOTS_DIR directory:

$ $SPOTS_DIR/upgrade_db/SG007649.ksh

The next procedure must be repeated on every SPOTS PMS host to be installed
 Do not install new SPOTS V14 TPs, only perform an upgrade for the TPs that exist
on the system (DS).

 Proceed with the installation of the latest SPOTS V14 TPs that are located in the SPOTS V14
TPs distribution DVD (Technology Plug-Ins for Solaris) under the root “/” directory, by
performing an upgrade ONLY to the TPs that are already installed in the DS system and
installing the same TPs for the AS system.

 See Chapter 11 - Technology Plug-Ins (TPs).

5. Install New SPOTS TPs ( Both AS & DS )

This procedure must be repeated on every SPOTS PMS host to be installed


 Now, install ONLY new SPOTS V14 TPs, which were not initially on the system
(AS or DS).

 Proceed with the installation of the latest SPOTS V14 TPs that are located in the SPOTS V14
TPs distribution DVD (Technology Plug-Ins for Solaris) under the root “/” directory, by
performing an installation ONLY for the new TPs that were not initially on the system.

 See Chapter 11 - Technology Plug-Ins (TPs).


 After the installation of the New TPs has been concluded proceed now to section 3.2.9.2.2 -
Merge of virtual entities and execute the steps for merging the virtual entities. ( Only AS )

6. Install Virtual X Server ( Both AS & DS )


You must install the Virtual X Server:

 Login as root user and insert the SPOTS Performance Management V14.0 Core DVD

 Execute the following command to install the Virtual X Server:


# /cdrom/cdrom0/Xvfb/install

 Remove the SPOTS Performance Management V14.0 Core DVD with the commands:
# cd /
# eject cdrom

 Edit the $SPOTS_DIR/sas.cfg file and add the following line:


VirtualClientDisplay=:9

7. Reboot the System ( Both AS & DS )

 Execute, as root user, the following command:


# /etc/shutdown -y -g0 -i6

 Migrating to new hardware from V12 or V13 completed.

3.2.4 SPOTS V14 Software Upgrade

SPOTS V14 Software Upgrade:

Step 1: Installing SPOTS Patches
Step 2: Upgrading SPOTS TPs [from “/” directory]
Step 3: Reboot the System
End

Figure 6, SPOTS V14 Software Upgrade

1. Installing SPOTS patches ( Both AS & DS )


This procedure must be repeated on every SPOTS PMS host to be installed.

 Execute the steps in Section 4.1 "Stopping SPOTS".

 Check which SPOTS patches are released and obtain them.

 Install SPOTS patches following each patch specific instructions.

2. Upgrading SPOTS TPs ( Both AS & DS )


This procedure must be repeated on every SPOTS PMS host to be installed.

 Proceed with the installation of the latest SPOTS V14 TPs that are located in the
SPOTS V14 TPs distribution DVD (Technology Plug-Ins for Solaris) under the root
“/” directory, by performing an upgrade to the TPs that were installed previously in the
system.

 See Chapter 11 - Technology Plug-Ins (TPs).

3. Reboot the system ( Both AS & DS )

 Execute, as root user, the following command:


# /etc/shutdown -y -g0 -i6

 V14 Software Upgrade Completed.


3.2.5 SPOTS V14 Hardware Upgrade

Upgrading SPOTS V14 Hardware: Small → Medium, Medium → Large

Step 1: Backup User Parameters (Both AS & DS)
Step 2: Provide External Repository For V14 Export (Both AS & DS)
Step 3: Export V14 Data (Only DS)
Step 4: Reinstall V14 System
   Sub-Step 1: Installing Specific Hardware
   Sub-Step 2: Installing Standard Software
   Sub-Step 3: Installing SPOTS Software on Solaris
   Sub-Step 4: Installing SPOTS RTA on Solaris
   Sub-Step 5: Installing SPOTS PMC on Windows
   Sub-Step 6: Installing SPOTS Patches
Step 5: Import V14 System Old Data (Only DS)
Step 6: Restore User Parameters (Both AS & DS)
Step 7: Upgrade SPOTS TPs (Only DS)
Step 8: Install New SPOTS TPs (Both AS & DS)
Step 9: Install Virtual X Server (Both AS & DS)
Step 10: Reboot The System (Both AS & DS)
End

Figure 7, SPOTS V14 Hardware Upgrade

 If WebReports is installed in the system, please consult the WebReports
documentation for specific upgrade instructions before starting the SPOTS upgrade.

Using the above flowchart as a guideline, follow the steps presented below to upgrade a SPOTS
V14 System from small to medium or from medium to large using additional hardware.

The SPOTS V14 system will not remain in operation, since the Solaris Operating System will be
reinstalled as described in Figure 7, SPOTS V14 Hardware Upgrade.

The steps to be followed for the SPOTS V14 system installation are the same as for the case of a
SPOTS V14 Initial Installation. This is described in section 3.2.1 Initial Installation of a SPOTS
V14 System, but only until Sub-Step 6. At Sub-Step 6, the New V14 system is ready to receive the
exported data from the SPOTS V14 system (Step 5 in the flowchart above)

In the following steps there is an indication in which hosts they are to be executed:

• Application Server ( Only AS )


• Database Server ( Only DS )
• Application Server and Database Server ( Both AS & DS )

If inside a step there is the need to refer an exception to the initial indication it will be indicated
explicitly.

If the system is a Single Server Environment then the Application Server host is the same as the
Database Server host.

 If the export to TAPE option is used, a minimum of 5 tapes is needed to
complete the export operation.

3.2.5.1 Backup User Parameters ( Both AS & DS )

Execute all the steps described in Section 3.2.8 in order to back up all relevant user parameters.

3.2.5.2 Provide External Repository for V14 Export ( Both AS & DS )

An external repository for saving the user parameters from the previous step and the exported data
of the SPOTS V14 system (next step) must be provided.

3.2.5.3 Export V14 Data ( Only DS )

 Operations to be executed in the SPOTS V14 System

 Login as root user in the SPOTS V14 System.


 Proceed to section 3.2.7 - Procedures for SPOTS Systems with BAR or


Autochanger Tape Device if the SPOTS V14 System has SPOTS-BAR installed
or if an Auto-changer Tape Device is installed. Return to this point afterwards.

 Execute the following commands to start the export of the SPOTS V14 Data:
# . /etc/spotsenv
# ksh $SPOTS_DIR/upgrade_db/export_db.sh

 When asked, enter the SPOTS database password for user “omcadm”.
Enter the SPOTS database password:

 The script will calculate the estimated dump file size and then ask which option to
save the dump files:

Calculating database size..


The estimated size of all database is: 34654 Mb

IMPORTANT NOTE:
When storing the export files to disk (UFS/NFS),
the compress rate is approx. 8-13% of total amount
of data. This means for example:

DATABASE DUMPFILE SIZE


-----------------------------
100GB 13GB
10GB 1GB
1GB 90MB

When you dump the export directly to tape,


no compress mechanism will be used.

Choose one of the following options:


1 DISK Export will be compressed, splitted and stored on
locally disks or partitions
2 NFS Export will be compressed locally and stored on NFS
mount point to remote server
3 TAPE Export will be dumped uncompressed to a tape device

Enter selection [?,??,q]:

Exporting to DISK

 If the dump files will be saved directly to disk then select option “1”.

 If you want to compress the exported data, accept the default option to the following question; otherwise type n.
Do you want to use a compress mechanism? (default: yes): [y,n,?,q]

 The script checks for partitions with sufficient disk space for the dump files and asks the user to select one.
Checking file systems
The script checks now for partitions with sufficient disk space for
the dump files

Please select
The following partitions have enough space to store the dump files

Please choose a file system:


1 /
2 /spots_db3
3 /spots_db2
4 /spots_db1
5 /spots_db5
6 /spots_db4
7 /spots_db6
8 /spots_rman
9 /opt
10 /var/opt

... 1 more menu choices to follow;


<RETURN> for more choices, <CTRL-D> to stop display:

 If none of the file systems has enough free space to store the dump files, a message like the one below will be issued and the script will exit:
There is not enough space on the system to store the dump files!
You will need approx. 3001104 kbytes of free space.
The user will have to free some space on the file systems or save the dump files to a tape device.
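To check the free space available on each mounted file system you can, for example, run the standard Solaris command:

# df -k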

 Select the desired partition. The script will now begin the dump of the data.
The dump will be written to /var/export_dump/
. . .


Export: Release 10.2.0.1.0 - Production on Fri Feb 3 10:13:30 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release


10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Export done in AL32UTF8 character set and AL16UTF16 NCHAR character
set

About to export specified tables via Direct Path ...


. . .

 At the end of the script execution, the dump files must be copied to the external repository that was provided in the previous step (Step 2), for example as shown below.
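A minimal sketch of that copy, assuming the dump files were written to /var/export_dump and the external repository is mounted at /mnt/repository (both paths are examples only):

# cp -p /var/export_dump/* /mnt/repository/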

Exporting to NFS share

 If the dump files will be saved to a NFS directory then select option “2”. The script
will give an estimate of the size of the database and ask if a compress mechanism
will be used. Accept the default option if you want to compress the data.

Calculating database size..


The estimated size of all database is: 57931 Mb

Choose one of the following options:


1 DISK Export will be stored on locally disks or partitions
2 NFS Export will be stored on NFS mount point to remote
server
3 TAPE Export will be dumped uncompressed to a tape device

Enter selection [?,??,q]:

Choose one of the following options:


1 DISK Export will be compressed, splitted and stored on
locally disks or partitions
2 NFS Export will be compressed locally and stored on NFS
mount point to remote server
3 TAPE Export will be dumped uncompressed to a tape device

Enter selection [?,??,q]: 2

 Now, it will warn the user about the requirements to export to a NFS share and ask for the path to it. Enter the path to the NFS share and the script will begin the dump of the data. Some issues have been detected between Oracle expdp and NFS shares, so pay attention to the output messages in order to ensure that all data is exported successfully. If you detect a problem, such as an Oracle error message with the code ORA-27054, unmount the NFS share and remount it according to the procedure described below.

Make sure that:


1. Destination host has shared a disk
2. Local host has write access to it

As root user you can configure it like this:


On destination host (desthost) run
#mkdir -m 777 dumpfiles
#share -F nfs -o rw /dumpfiles
On local host (localhost) run
#mkdir -p /mnt/dumpfiles
#mount -F nfs desthost:/dumpfiles /mnt/dumpfiles

--------------------
VERY IMPORTANT
--------------------
If you see error messages like the one bellow:
ORA-27054: NFS file system where the file is created or resides is
not mounted with correct options

You will have to mount the nfs share (in the localhost) with the
following commands:
# umount -f /mnt/dumpfiles
# mount -F nfs -o
rw,vers=3,bg,intr,timeo=600,wsize=32768,rsize=32768,hard
desthost:/dumpfiles /mnt/dumpfiles

 Enter the absolute path to the NFS share:

Enter NFS local path: [?,q]


 At the end of the script execution, the dump files must be copied to the external repository that was provided in the previous step (Step 2).

Exporting to TAPE

 If a tape device will be used then select option “3”.


Choose one of the following options:
1 DBTYPE Each DB Type will be exported to a new tape
2 MINIMUM Export will try to use the minimun number of tapes
 If option “1” is selected at least 5 tapes will be needed

 The script will ask for the full path to the tape mount point, and then for the TAPE
volume size. Enter /dev/rmt/1n for the full path to the tape mount point.

Selected export: DBTYPE

Enter fullpath to the TAPE mount point (default /dev/rmt/0): [?,q]


/dev/rmt/1n

##########################################################
IMPORTANT NOTE:
Oracle will round VOLSIZE down,
please write down that value, it will be asked on import.
##########################################################
Enter TAPE (VOLSIZE) size in Mb (default: 24000): [?,q]

checkRTInstall
RT NOT Installed

Select DB to backup:
1 REF Contains REF_DATA
2 BASIC Contains BASIC_DATA
3 HIST Contains HIST_DATA
4 TRF Contains TRF_DATA

Enter selection [?,??,q]:

 The script will now begin the dump of the data.


Starting export at: segunda-feira 13 fevereiro 2006, 16:01:43 WET

Exporting REF DATA
Export: Release 10.2.0.1.0 - Production on Mon Feb 13 16:01:43 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release


10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
EXP-00074: rounding VOLSIZE down, new value is 25165767675
Export done in AL32UTF8 character set and AL16UTF16 NCHAR character
set
About to export specified tables via Conventional Path

 At the end of the export script the following message should appear:
Export terminated successfully without warnings.

 Repeat this procedure for every DB type


 If option “2” is selected

 The script will ask for the full path to the tape mount point, and then for the TAPE
volume size. Enter /dev/rmt/1n for the full path to the tape mount point.
Selected export: MINIMUM

Enter fullpath to the TAPE mount point (default /dev/rmt/0): [?,q]


/dev/rmt/1n

Enter TAPE (VOLSIZE) size in Mb (default: 24000): [?,q]

 The script will now begin the dump of the data.


Starting export at: segunda-feira 13 fevereiro 2006, 16:01:43 WET
Exporting REF DATA
Export: Release 10.2.0.1.0 - Production on Mon Feb 13 16:01:43 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release


10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Export done in AL32UTF8 character set and AL16UTF16 NCHAR character
set
About to export specified tables via Conventional Path

 The script will ask for a new tape when and if needed.

3.2.5.4 Reinstall Spots V14 System


Follow the installation procedure depicted in Figure 7, SPOTS V14 Hardware Upgrade, on how to re-install the SPOTS V14 system (when doing the re-installation, the new system is configured according to the HW upgrade). Do this procedure following section 3.2.1 Initial Installation of a SPOTS V14 System. Remember that you must stop at Sub-Step 6.

3.2.5.5 Import V14 system Old Data ( Only DS )

 The remaining steps are to be executed in the SPOTS V14 System

 Login as root user in the SPOTS V14 System.

 Reboot the system by executing, as root user, the following command:


# /etc/shutdown -y -g0 -i6

 The database partitioning is used to automatically remove the PM detailed data. If activated, 15 days of detailed data (with granularity greater than 5 minutes) are stored by default.
 If the retention period in the previous SPOTS V14 system was not the default, this setting must be changed in the new SPOTS V14 system before starting to import the previous V14 system data. Change the property NumberDaysInDetailPartition_86400 (for data with granularity greater than 5 minutes) in the “sds.cfg” file (refer to Annex 3), as illustrated below.
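A hypothetical “sds.cfg” line, assuming the previous system kept 30 days of detailed data (the property name is taken from the text above; the value 30 is only an example):

NumberDaysInDetailPartition_86400=30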

 When the changes to the “sds.cfg” file are finished, stop and start SPOTS as described in Sections 4.1 and 4.2.

 Stop all SPOTS services (see 4.1-Stopping SPOTS).

 Copy the dump files from the external repository to a local directory in the SPOTS V14 system (e.g., as sketched below), or restore these files from the tapes in case they were placed on tape devices.
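A minimal sketch of that copy, assuming the repository is mounted at /mnt/repository and /opt/export_dump is used as the local directory (both paths are examples only):

# mkdir -p /opt/export_dump
# cp -p /mnt/repository/* /opt/export_dump/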

 Execute the following command to start the import of the old SPOTS V14 Data into
the new SPOTS V14 System:
# . /etc/spotsenv
# ksh $SPOTS_DIR/upgrade_db/import_db.sh

 This command will ask for the path to the external repository where the export files
were saved and for the password of the “omcadm” user.
Do you want to import from:
1 DISK Import from disks or partitions
2 TAPE Import from a tape device
Enter selection [?,??,q]:

Importing from DISK

 If the import is to be done from disk or partitions then select option “1”. This
command will ask for the path to the external repository where the export files were
saved and for the password of the “omcadm” user.

Creating Import script

Enter the Import directory path:


/opt/export_dump2

 The import of data starts. Notice that this process can take several hours. The time
for importing the data depends on the size of the database exported.

Starting import at: Tue Sep 15 15:43:32 WEST 2009

IMPORTING REF DATA .....

Import: Release 10.2.0.3.0 - 64bit Production on Tuesday, 15


September, 2009 15:43:32

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release


10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "OMCADM"."SYS_IMPORT_FULL_02" successfully
loaded/unloaded
Starting "OMCADM"."SYS_IMPORT_FULL_02": omcadm/********
directory=spots_dpump_dir table_exists_action=append
dumpfile=REF_DATA.dmp LOGFILE=ref.log
Processing object type TABLE_EXPORT/TABLE/TABLE

Importing from TAPE

 If the import is to be done from tape then select option “2”. This command will ask
for the password of the “omcadm” user and for the tape mount point:
Enter the SPOTS database password:
spots2005

Preparing Counter List...

Creating Import script

 Choose the options according to the export method chosen.


Choose one of the following options:


1 DBTYPE Each DB Type will be exported to a new tape
2 MINIMUM Export will try to use the minimun number of tapes

 If option “1” was selected then select the DB type to import


Selected import: DBTYPE
checkRTInstall
RT NOT Installed

Select DB to backup:
1 REF Contains REF_DATA
2 BASIC Contains BASIC_DATA
3 HIST Contains HIST_DATA
4 TRF Contains TRF_DATA
5 DONE All imports were done go to next step

Enter selection [?,??,q]:1

Enter fullpath to the TAPE mount point (default /dev/rmt/0):


[?,q] /dev/rmt/1n
 Insert the value returned on export
Enter VOLSIZE from export:
25165767675

Starting Import
Starting import at: segunda-feira 13 fevereiro 2006, 16:10:39 WET
Importing REF DATA

Import: Release 10.2.0.1.0 - Production on Mon Feb 13 16:10:39 2006


Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release


10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V11.02.01 via conventional path

.
.

.
Altering sequences, please wait...
done.

 Repeat this procedure for every DB type


 If option “2” is selected

 The script will ask for the full path to the tape mount point, and then for the TAPE
volume size.
Selected import: MINIMUM

Enter fullpath to the TAPE mount point (default /dev/rmt/0): [?,q]


/dev/rmt/1n

 The script will now start the import


Starting Import

Starting import at: segunda-feira 13 fevereiro 2006, 16:14:46 WET

IMPORTING REF DATA .....

Import: Release 10.2.0.1.0 - Production on Mon Feb 13 16:14:46 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release


10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V11.02.01 via conventional path


.
.
.

Altering sequences, please wait...


done.

3.2.5.6 Restore User Parameters ( Both AS & DS )


Restore the user parameters that were saved in Step 1. This procedure is applied on the installed
SPOTS Server that contains the SPOTS Database (DB Server) and also on the SPOTS Server that
contains the Application Server (AS Server).

 Perform the steps described in Section 3.2.9, with the exception of the steps described in section 3.2.9.2.2 - Merge of virtual entities.

 Start all SPOTS services (see 4.2-Starting SPOTS ).

3.2.5.7 Upgrade SPOTS TPs ( Only DS )

This procedure must be repeated on every SPOTS PMS host to be installed


 Do not install new SPOTS V14 TPs, only perform an upgrade for the TPs that exist on the
system.

 Proceed with the installation of the latest SPOTS V14 TPs that are located in the SPOTS V14
TPs distribution DVD (Technology Plug-Ins for Solaris) under the root “/” directory, by
performing an upgrade ONLY to the TPs that are already installed in the system.

 See Chapter 11 - Technology Plug-Ins (TPs).

3.2.5.8 Install New SPOTS TPs ( Both AS & DS )

This procedure must be repeated on every SPOTS PMS host to be installed


 Now, install ONLY new SPOTS V14 TPs, which were not initially on the system.

 Proceed with the installation of the latest SPOTS V14 TPs that are located in the SPOTS V14
TPs distribution DVD (Technology Plug-Ins for Solaris) under the root “/” directory, by
performing an installation ONLY for the new TPs that were not initially on the system.

 See Chapter 11 - Technology Plug-Ins (TPs).

 After the installation of the New TPs has been concluded proceed now to section 3.2.9.2.2 -
Merge of virtual entities and execute the steps for merging the virtual entities. ( Only AS )

3.2.5.9 Install Virtual X Server ( Both AS & DS )

You must install the Virtual X Server:

 Login as root user and insert the SPOTS Performance Management V14.0 Core-Drop
1 DVD

 Execute the following command to install the Virtual X Server:


# /cdrom/cdrom0/Xvfb/install

 Remove the SPOTS Performance Management V14.0 Core-Drop 1 DVD with the
commands:

# cd /
# eject cdrom

 Edit the $SPOTS_DIR/sas.cfg file and add the following line:


VirtualClientDisplay=:9

3.2.5.10 Reboot the System ( Both AS & DS )

 Execute, as root user, the following command:


# /etc/shutdown -y -g0 -i6

 The hardware upgrade of the SPOTS V14 system using additional new hardware is complete.


3.2.6 Installation of Oracle Instant Client in Application Server (AS) machine


In a Distributed installation it is necessary to have Oracle Instant Client components installed in the
Application Server (AS) machine. Proceed with the following steps:

 Login as root user.

 To install Oracle Instant Client, please insert the “Oracle Installation Packages”
Media in the DVD drive and run the following commands:
# cd /cdrom/cdrom0/10.2.0.1_CL
# ./install.sh

 Now, return and proceed with the remaining steps in the upgrade/installation
procedure.

3.2.7 Procedures for SPOTS Systems with BAR or Autochanger Tape Device
In order to correctly upgrade the system you must follow the procedures below, depending on whether SPOTS-BAR is installed and/or an Autochanger Tape Device is also part of the system.

3.2.7.1 System with SPOTS-BAR and Autochanger Tape Device

 First, the SPOTS Database must be set in NOARCHIVELOG mode.

 To do so, login as root user and execute the following commands:


# su - oracle
$ export ORACLE_SID=spot
$ sqlplus "/AS SYSDBA"
SQL> shutdown immediate
SQL> startup mount
SQL> alter database NOARCHIVELOG;
SQL> alter database open;
SQL> quit
$ exit

 Second, the Legato automatic backup must be disabled.

 Please refer to the SPOTS-BAR Installation and User Manual of the corresponding SPOTS version (V12 or V13) for more specific details on this procedure.

 To do so, login as root user and execute the following steps.

 Start the legato admin console as root:


# nwadmin -s <servername>

 Disable all active Spots Groups.

 Reset the autochanger by entering the following command


# nsrjb -vHE

 Eject all tapes from autochanger.

 Shutdown legato software


# nsr_shutdown

 Put a single tape in the autochanger.


 Third, return and proceed with the remaining steps in the upgrade procedure.

3.2.7.2 System only with an Autochanger Tape Device

 Eject all tapes from autochanger

 Put a single tape in the autochanger.

 Now, return and proceed with the remaining steps in the upgrade procedure.

3.2.8 Backup user parameters from an existing SPOTS system
In order to correctly upgrade the system you must carefully archive the relevant existing user
parameters:
• Save a copy of user-defined configuration parameters and files (see Section 3.2.8.1).

3.2.8.1 Backup user-defined configuration parameters and files

The files and parameters described below need to be saved for future inclusion in the new SPOTS V14 installation.
Section 3.2.8.3 lists the files related to the basic SPOTS Long-Term functionality. These files always need to be saved.
Sections 3.2.8.4 and 3.2.8.5 are concerned with real-time related configuration parameters and files, respectively, and thus they are only relevant in case real-time processing applies to the SPOTS installation whose data is being migrated.

3.2.8.2 Gather information of existing SPOTS System Users

Collect all the relevant information about the users in the existing (old) SPOTS system, e.g.,
usernames and passwords, in order to create them later in the new SPOTS system.

3.2.8.3 Long-Term Files

Relative file paths are located under the SPOTS base installation directory.

Files existing on any SPOTS server
• /etc/spotsenv: This file holds the SPOTS run-time environment. The ‘old’ user-defined environment variables must be reconfigured.

Files existing on SPOTS servers with the spotsAS package installed
• $SPOTS_DIR/public/custom/reports/virtual_entities.dat: This file carries the Nokia Siemens Networks pre-defined virtual entities. The ‘old’ user-defined public virtual entities must be added onto the new SPOTS installation by executing a script that will perform a merge of the virtual entities.
• domain.cfg: Default is only the ‘root’ domain defined. Use the ‘old’ domains configuration file.
• nodes_creation.cfg: The ‘old’ user-defined automatic nodes creation entries must be added onto the new SPOTS installation.
• sas.cfg: The ‘old’ user-defined properties must be added onto the new SPOTS installation.
• users.cfg: No users are defined by default. Use the ‘old’ users configuration file.
• egw.cfg: The ‘old’ user-defined properties must be added onto the new SPOTS installation.
• mail.cfg: The ‘old’ user-defined properties must be added onto the new SPOTS installation.
• RC configuration files inside $SPOTS_DIR/data (oltaccess.cfg, gsnl.dat, element_managers.cfg): These are the RC configuration files that should be saved to assist in the configuration of the new SPOTS V14 system.

Files existing on SPOTS servers with the spotsDS package installed
• sds.cfg: The ‘old’ user-defined properties must be added.

Files existing on SPOTS servers with the spotsNS package installed
• sns.cfg: The ‘old’ user-defined properties must be added.

In addition to the files mentioned in the above list, save the following data (an example of archiving these items is shown after this list):
• User Private Data: Save all files located under $SPOTS_DIR/users
• User Scheduled Tasks: Save all files located under $SPOTS_DIR/scheduler
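As an illustration only, the user data directories and the environment file could be copied to an external repository mounted at /mnt/repository (a hypothetical mount point); the applicable configuration files from the list above would be saved in the same way:

# . /etc/spotsenv
# cp -rp $SPOTS_DIR/users $SPOTS_DIR/scheduler /mnt/repository/
# cp -p /etc/spotsenv /mnt/repository/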

3.2.8.4 Real-Time Configuration Parameters

This section lists some Real-Time specific configurations that should be saved from the SPOTS
Installation being migrated, in order to ease the task of reconfiguring them again on the new V14
SPOTS Installation.

1. Real-Time Agency default memory:


If the value of the maximum memory available for any RT Agency has been changed from the
default, write down the changes made in the file

<%SPOTS-RTAgency%>\james\profiles\service.properties

Edit the $SPOTS_DIR/server_rt/properties/MonitorServer.properties file and insert the IP address for the database server:

database.hostname = <IP of the database server>

 Refer to Section 9.7.1 - Configuring SPOTS RT Agency Software (Solaris environment) for details on these parameters.
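For example (illustrative only, reusing the sample IP address from the Solaris installation chapter):

database.hostname = 129.200.9.1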

3.2.8.5 Real-Time Configuration Files

The following lists depend on whether the RT Agencies are located in a Solaris environment or in
a Windows environment.
For Windows, relative paths are located under the SPOTS Agency base installation directory.

RT Agencies in Solaris
• $SPOTS_DATA/traffic_data/<data_type>/cfg/real_time.cfg: These files are created manually by the user when the Agency is installed in Solaris (refer to Section 9.7.2.1 Configuring real_time.cfg files).

RT Agencies in Windows
• \FileTransfer\cfg\AgentDirectories.cfg: This file holds information about the name of the files that are processed by each Agent/Agency.
• \FileTransfer\cfg\FileTransfer.cfg: This file holds information about the parameters for the FileTransfer application.
• \FileTransfer\cfg\PDCData.cfg: This file holds information about the identification of the files generated for each type of data.


3.2.9 Restore user parameters on the newly installed SPOTS system


In order to correctly upgrade the system you must carefully restore the relevant existing data.

 The operation of restoring the previously saved copy of user-defined configuration parameters and files shall be executed later on in the upgrade procedure.
Thus, after executing the steps below do not proceed to the next section; instead, return to the list of steps of section 3.2 for the corresponding type of upgrade.

3.2.9.1 Create Old SPOTS System Users in the New SPOTS System

Use the information gathered in section 3.2.8.2 and create the old SPOTS system users in the New
SPOTS system.

3.2.9.2 Restoring user-defined configuration parameters and files

Restore the previously saved copy of user-defined configuration parameters and files (see Section
3.2.8.1 for details on the saved parameters and files):

3.2.9.2.1 Merge of generic long term files

 Merge the long-term files saved (section 3.2.8.3) with the new files installed by the new SPOTS packages, with the following exceptions:
 If upgrading on existing HW then proceed with the merge and execute the following steps for the virtual entities file:

 Copy the previously saved virtual entities file to the directory where the new virtual entities file is stored ($SPOTS_DIR/public/custom/reports) and overwrite it.
 If upgrading to new HW (3.2.3) then proceed with the merge, except for the virtual entities file, which will be handled in the next steps.

3.2.9.2.2 Merge of virtual entities

 Execute the following steps only after the step “Install New SPOTS TPs”, described in the upgrade flowchart, has been concluded.

 For the merge of the virtual entities file a script must be executed.

 Login as root user

 Execute the command:


# . /etc/spotsenv

 Make a backup of the original virtual entities file, for example:

# cd $SPOTS_DIR/public/custom/reports
# cp -p virtual_entities.dat virtual_entities.dat.original
# cd /

 Execute the following script:


# $SPOTS_DIR/bin/merge_ve.sh

 The script will ask for the path where the “old” virtual entities file from V12 or V13 System is
located.
# Enter path to V12 or V13 VE's: [?,q]

 Write the path and press enter. The script will confirm that the file exists on the given path
and will ask for the path of the current virtual entities file for the V14 System.
# Enter path to V12 or V13 VE's: [?,q] /export/home/spots/V12_VEs
/export/home/spots/V12_VEs/virtual_entities.dat exists.

Enter path to V14 VE's: [?,q]

 Write the path ($SPOTS_DIR/public/custom/reports) and press enter. The script will start
merging the files.
# Enter path to store final VE's file (default /opt/spots-pms/spots-
pms/public/custom/reports): [?,q]

Perl installed!

Creating temp file....


temp file created

MERGE STARTED: quinta-feira 16 fevereiro 2006, 11:58:14 WET

CREATING NEW FILE...


File created

MERGE FINISHED: quinta-feira 16 fevereiro 2006, 12:00:49 WET

 In the previous provided directory, the output file (new merged file) will be written with its
original name (virtual_entities.dat) and the previous file (not merged) will be written with a
date extension, e.g.:
virtual_entities.dat
virtual_entities.dat.20060216121604

 To verify that the virtual entities were correctly merged and that the file was correctly processed, use the SPOTS Client to open the virtual entities and manipulate them (please refer to the SPOTS V14 User Manual for further details on this operation), e.g., create a new virtual entity.

 Merge of virtual entities completed.

3.2.9.2.3 Restore of user tasks

 Login as spots user.

 Stop all spots services as described in Section 4.1 "Stopping SPOTS".

 Copy to the $SPOTS_DIR/users directory the content of the users directory from the previous V12 or V13 System, for example as sketched below.
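A minimal sketch, assuming the old users directory was saved to an external repository mounted at /mnt/repository (a hypothetical path):

# . /etc/spotsenv
# cp -rp /mnt/repository/users/* $SPOTS_DIR/users/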

3.2.9.2.4 Merge of generic real time files

 Merge the real-time files and parameters saved at the beginning of this procedure (sections 3.2.8.4 and 3.2.8.5) with the new files installed by the new SPOTS packages.
Additionally, when upgrading SPOTS V12 to SPOTS V14, the following procedure is required to make the public and private reports created in V12 visible in the SPOTS GUI.
For each report to be reused from V12, perform the following steps (for details on the used functionality see the SPOTS User Manual [1]):

 Edit the report with Spots Reports Editor.

 Compile the report.

 Execute "Add Report to server" and press the OK button after filling in all necessary fields.

 Private reports must be added to the server by the user that created them.

4 Starting and stopping SPOTS

This section contains instructions to stop and start only the SPOTS services and associated jobs.
For instructions on how to start and stop the SPOTS Client (User Interface) applications, consult
[1].

4.1 Stopping SPOTS


Execute the following steps to stop the SPOTS services and the associated jobs:
 Ensure that all SPOTS users have exited from all SPOTS applications including all SPOTS
Client sessions.

The following procedure can be used to stop all applicable services with a single command:

 Login as root user.


 Consult TP documentation (see section 11.1 - Documentation) to know how to stop
TP-specific services and the associated jobs, before continuing with the next step.

 The command initSpots affects ALL SPOTS services, which include: the Long Term (LT), Real Time (RT), SAA watchdog and Add-on services.
 There is also the possibility to use the command initSpotsPMS, which affects ONLY the Long Term (LT), Real Time (RT) and Add-on services.

 Issue the following command:


# /etc/init.d/initSpots stop

 This command attempts to stop all the above indicated services. It might happen that some
of the services/processes do not stop; in such a case, proceed as follows:

 Wait about a minute and then verify if the PMS processes are stopped, executing the
following command:
# ps -ef | grep s[nad]s

 Verify also if the RTS processes are stopped, executing the following command:
# ps -ef | grep rtmonitor
# ps -ef | grep rtapm
# ps -ef | grep rtmanager

 Verify also if the SAA process is stopped, executing the following command:
# ps -ef | grep snmpdm
# ps -ef | grep saawd

 If the processes are already stopped, no action is required; otherwise force them to stop,
with:
# /etc/init.d/initSpots stop -force


 If you have a RT Agency (RTA) installed in the machine you should continue with the next
steps.

 Issue the following command:


# /etc/init.d/initSpotsAgency stop

 This command attempts to stop the RT Agency service. It might happen that the service/process does not stop; in such a case, proceed as follows:

 Wait about a minute and then verify if the RT Agency process is stopped, executing the
following command:
# ps -ef | grep rtagency

 If the process is already stopped, no action is required; otherwise force it to stop, with:
# /etc/init.d/initSpotsAgency stop -force

4.1.1 Stopping SPOTS LT services only

The following procedure can be used to stop only long-term services:

 Login as root user.

 Stop all PMS Long-Term services:


# /etc/init.d/initSpotsLT stop

 This command attempts to stop all the PMS servers (SNS, SAS and SDS) that are installed
in this host. It might happen that some of the services/processes do not stop; in such a case,
proceed as follows:

 Wait about a minute and then verify if the PMS processes are stopped, executing the following command:
# ps -ef | grep s[nad]s

 If the processes are already stopped, no action is required; otherwise force them to
stop, with:
# /etc/init.d/initSpotsLT stop -force

4.1.2 Stopping SPOTS RT services only

The following procedure can be used to stop only real-time services:

 Login as root user

 Stop all PMS Real-Time services:


# /etc/init.d/initSpotsRT stop

 This command attempts to stop all the PMS Real-Time services (RTS) that may be running
on this host. It might happen that some of the services/processes do not stop; in such a case,
proceed as follows:

 Wait about a minute and then verify if the RTS daemons are stopped, executing the
following command:
# ps -ef | grep rtmonitor
# ps -ef | grep rtapm
# ps -ef | grep rtmanager

 If the processes are already stopped, no action is required; otherwise force them to
stop, with:
# /etc/init.d/initSpotsRT stop -force
 If you have a RT Agency (RTA) installed in the machine you should continue with the next
steps.

 Issue the following command:


# /etc/init.d/initSpotsAgency stop

 This command attempts to stop the RT Agency service. It might happen that the service/process does not stop; in such a case, proceed as follows:

 Wait about a minute and then verify if the RT Agency process is stopped, executing
the following command:
# ps -ef | grep rtagency

 If the process is already stopped, no action is required; otherwise force it to stop,


with:
# /etc/init.d/initSpotsAgency stop -force

 SPOTS RT stopped

4.1.3 Stopping SPOTS add-ons services only

The following procedure can be used to stop only SPOTS add-ons services:

 Login as root user

 SPOTS add-ons services can be stopped separately or all together; to stop all services, execute:
# /etc/init.d/initSpotsAdminServices stop

 This command attempts to stop the SPOTS add-ons services. It might happen that some of the services/processes do not stop; in such a case, proceed as follows:


 Wait about a minute and then verify if all services are stopped, executing the command again:
# /etc/init.d/initSpotsAdminServices stop

 If no messages are output, then all services were stopped and no action is required; otherwise force them to stop, with:
# /etc/init.d/initSpotsAdminServices stop -force

To stop services separately.

 To stop Active Warnings Proxy execute:


# /etc/init.d/initSpotsActiveWarningsProxy stop

 To stop System Monitor execute:


# /etc/init.d/initSpotsSystemMonitor stop

 To stop Administration Console execute:


# /etc/init.d/initSpotsAdminConsole stop

 SPOTS add-ons Services stopped

4.2 Starting SPOTS
Execute the following steps to start the SPOTS services and the associated jobs:
 The “start” argument is optional in the commands below, i.e., it can be omitted with the same results, for simplicity.
The following procedure can be used to start all applicable services with a single command (see
below for a method that uses a specific command for each service):

 Login as root user.


 The command initSpots affects ALL SPOTS services, which include: the Long Term (LT), Real Time (RT), SAA watchdog and Add-on services.
 There is also the possibility to use the command initSpotsPMS, which affects ONLY the Long Term (LT), Real Time (RT) and Add-on services.
 There are other options when using initSpots, initSpotsPMS, initSpotsLT and initSpotsRT.
Usage: initSpots [start [-log]] | [stop [-force|-quit]]
By default, init logs are not created, but they can be with the -log option.
The -quit option is useful for debugging purposes.

 Before proceeding with the next step, verify that all the SPOTS services are stopped:
 Execute the following command:
# ps -ef | grep s[nad]s

 If the processes are already stopped, no action is required; otherwise force them to
stop, with:
# /etc/init.d/initSpots stop -force

 Give the following command:


# /etc/init.d/initSpots start

 If you have a RT Agency (RTA) installed in the machine you should continue with the next
steps.

 Issue the following command:


# /etc/init.d/initSpotsAgency start
 Additionally, consult TP documentation (see section 11.1 - Documentation) to know how to
start TP-specific services and the associated jobs.


4.2.1 Starting SPOTS Long-Term services only

The following procedure can be used to start only the Long-Term services:

 Login as root user

 Start all PMS Long-Term services (SNS, SAS and SDS):


# /etc/init.d/initSpotsLT start

4.2.2 Starting SPOTS Real-Time services only

The following procedure can be used to start only the Real-Time services:

 Login as root user

 Before proceeding with the next step, verify that all the SPOTS services are stopped:
 Execute the following command:
# ps -ef | grep s[nad]s

 If the processes are already stopped, no action is required; otherwise force them to
stop, with:
# /etc/init.d/initSpotsPMS stop -force

 Start all PMS Real-Time services (RTS and RTA) and SAA services:
# /etc/init.d/initSpotsRT start

 If you have a RT Agency (RTA) installed in the machine you should continue with the next
steps.

 Issue the following command:


# /etc/init.d/initSpotsAgency start

 Now it’s possible to start the SPOTS PMC application.

4.2.3 Starting SPOTS add-ons services only

 Before starting the SPOTS Administration Console, make sure that ports 8005, 8009 and 8080 are not in use by any other process. To do so, execute the next steps.

 Check whether the ports are being used:

# netstat -an | grep 8005
# netstat -an | grep 8009
# netstat -an | grep 8080

 If no messages are returned from any of the commands, then everything is OK and the Administration Console can be started.

 If a message like “127.0.0.1.8005 *.* 0 0 49152 0 LISTEN” is returned, then you need to stop the processes that are using that port or change the default ports for the Administration Console; to do this, execute:
# . /etc/spotsenv
# cd $SPOTS_ADM_DIR/conf

 Edit the server.xml file, find the line containing that port number (… port=”8005” …) and replace that value with a value higher than 9000 that is not in use, for example as shown below:
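A hypothetical before/after of that edit, assuming 9005 is a free port (verify it first, e.g. with the netstat commands above):

Original: … port="8005" …
Changed:  … port="9005" …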

SPOTS add-ons services can be started separately or all together.

 Login as root user

 To start all SPOTS add-ons services do:


# /etc/init.d/initSpotsAdminServices start

To start services separately.

 To start Active Warnings Proxy execute:


# /etc/init.d/initSpotsActiveWarningsProxy start

 To start System Monitor execute:


# /etc/init.d/initSpotsSystemMonitor start

 To start Administration Console execute:


# /etc/init.d/initSpotsAdminConsole start

 SPOTS add-ons services started.


5 Installing SUN Solaris 10

This chapter describes how to install the Solaris 10 10/08 Operating System, which is required by all SPOTS PMS components and by the SPOTS Client for Solaris. The SPOTS Client for Windows 2003/XP is also available; however, a description of how to install the Microsoft Windows 2003/XP Operating System is not included here.

 The following description is DVD oriented.

 If a graphics accelerator (e.g. XVR-500, XVR-100 and XVR-1000) is installed in your system
and you are trying to install Solaris 10, it is not possible to launch the graphical user
interface for the installation. That’s why you have to configure and install Solaris via a text
based user interface. The dialogs in this text based installation procedure are the same as in
the GUI installation. Just follow the instructions on the screen and enter the required
information as described in the current section (for installation).

 If you are installing SPOTS on a Sun SPARC M3000/M4000 Enterprise server, it might be necessary to set up the XSCF to access the machine's console (ok prompt). In order to do this, go to Annex 14, and return here when completed.

 If you are installing a server with external storage, make sure the external storage array is connected to the host and powered on. This should be done according to Chapter 7, SPOTS Configurations with External Storage.

 To install Solaris 10 via a server management port, you can use the HyperTerminal application of the Microsoft Windows 2003/XP Operating System (Start->Programs->Accessories->Communications->HyperTerminal) and configure it in the following way:

 Start the application and in the “Connection Description” window enter a name and
choose an icon for the connection and click OK

 In the “Connect To” window specify the type of connection to use, let’s assume you will
use, for example, COM1, and click OK

 In the COM1 properties window select the properties values valid for your connection
and when finished click OK

 In the HyperTerminal main window save the connection for future use (File->Save
as…) and for connecting use the “Connect” menu
In order to start the installation procedure for the Solaris 10 Operating System, verify that the necessary information has been gathered (refer to Section 3.1.4). Then, execute the actions below:
 IMPORTANT NOTE: If you will install Fault Tolerance with disk mirroring do not forget to fill
the necessary information, refer to the “Information to fill before OS installation” on Annex 5.

 Switch on your system and verify that all external devices are properly connected. While the
Operating System is booting, enter the system monitor prompt, pressing the keys:
“Stop” + “A”

 If you are using tip then you will need to press the following keys
“~” + “#”

 If you are using Hyper Terminal then you will need to press the following keys
“Ctrl” + “Break”

 If you will install Fault Tolerance with disk mirroring or the last system installation included it
then execute the following two steps at the ok prompt

 Take note of the booting device, in the “old boot-device” field of the Previous OS Installation Boot Device table in Annex 5, by issuing the following command:
ok printenv boot-device
 This value can be used to recover from OS installation before any disk change.

 Change the booting device issuing the following command


ok setenv boot-device disk

 Insert the Solaris 10 10/08 Software DVD and enter the following command:
ok boot cdrom

 Select English as the Solaris Installer language:


“0” for English.

 Select the terminal type “DEC VT100”:


“3” for DEC VT100

 Press “F2” to continue

 Select Network Interface (if applicable), for example:


“bge0”

 Use Dynamic Host Configuration Protocol (DHCP), if asked:


“No”

 Specify the hostname for the machine.

 Enter the Internet Protocol (IP) Address for this system, for example:
“129.200.9.1”

 System part of a subnet:


“Yes”

 Specify the Netmask of your subnet, or accept the default value, for example:
“255.255.255.128”
 Do not accept the default Netmask unless you are sure it is correct for your subnet.

 Enable IPv6:
“No”


 Set the Default Route. If you know the IP address to the default gateway select “Specify one”,
otherwise select “Detect one upon reboot”
o Input the Default Router IP Address, if you have chosen to specify

 Confirm the information by pressing “F2” to continue or “F4” to change the information.

 Configure Kerberos Security:


“No”

 Confirm the information by pressing “F2” to continue or “F4” to review the information.

 Select the Name Service that will be used by this system:


“None”

 Confirm the information by pressing “F2” to continue or “F4” to review the information.

 Choose “Use the NFSv4 domain derived by the system”

 Confirm the information by pressing “F2” to continue or “F4” to review the information.

 Select the Time Zone


o Select the Continent, then the Country, select specific Region if prompted and confirm
the Date values

 Confirm the information by pressing “F2” to continue or “F4” to review the information.

 Type the alphanumeric string to be used as root password and confirm it (press Enter after
typing the password in each field).

 Confirm the information by pressing “F2” to continue or “F4” to review the information.

 Select “Yes” for the Remote Services.

 System identification is complete.

 Select Type of Install:


“Standard”

 Select to automatically eject the DVD:

“Automatically eject CD/DVD”

 Reboot After Installation:

“Auto Reboot”

 Solaris Interactive Installation:


“Initial”

 Read and Accept License to continue installation:

“Accept License”

 Select Geographic Regions, for which support should be installed. Example:

“Southern Europe” > “Portugal”

 Select System Locale:

Accept the default POSIX C by pressing "F2"

 Additional Products:
“None”

 Choose Filesystem Type:


“UFS”

 Select Software:
“Entire Distribution ”

 Select the disks according to the definitions of section 2.4-Hard Disk Partitioning.

 Select all available disks to lay out the file systems on.

 Preserve existing data (only if any of the selected disks has file systems or unnamed slices
that you can choose to preserve):
“Continue”

 Automatic Layout of File Systems:


“Manual Layout”

 Define the layout of all existing disks, following the SPOTS recommendations – see Section
2.4. Specify all the corresponding file systems and define their size (in MB).
“Customize”
 Take into consideration the intended SPOTS configuration that will be installed in the system.
Different SPOTS configurations imply different hard-disk requirements and different file
systems organisation.
 If Fault tolerance with disk mirroring will be installed then it is necessary to fill the “Information
to fill during OS installation” on Annex 5.
 Verify the information presented in the installation summary. If information is correct, proceed
to installation; otherwise, correct it.

 The actual installation time depends on the software you chose to install, the reallocation of
any space if needed and the drive performance.

 Mount Remote File Systems.


“Continue”

 Profile. Check the displayed information.


• “Begin Installation”.


 If the Warning window does not report any error.


• Press: OK to continue
 The Solaris 10 Installation begins. It may take around 1 hour to finish.
“CD/DVD”

 At the end of the installation you can choose “View Log” to view the installation log or “Done”
to continue.

 The Solaris 10 Software DVD is ejected.

 System reboot is automatically initiated.

 If prompted about a warning regarding NFS version 4 choose “no”.

 If asked for a keyboard layout, choose “Portuguese”.

 Login as user root and run the following command once:


# svcadm enable smserver

 Installation of the Solaris 10 completed.

5.1 Installing System Patches
It is recommended to install the system patches on top of the operating system.
SPOTS software is certified to be used only with the version of Solaris 10 Recommended Patch
Cluster that you can obtain through local NSN CARE support.

Install the hotfix SG008873_NN present in the delivery tool, and follow the indications in the corresponding release notes.
Note: "NN" should be the latest version of the specific hotfix.

 Installation of the system patches completed.


6 Fault Tolerance with Disk Mirroring

 IMPORTANT NOTE: This feature cannot be uninstalled.

 IMPORTANT NOTE: you must proceed with the steps described in this chapter before doing any of the steps described in Chapter 7 - SPOTS Configurations with External Storage (only if the installation type is medium or large).

To improve SPOTS availability, Fault Tolerance with Disk Mirroring can be used. This chapter describes how to install and maintain Fault Tolerance with Disk Mirroring.
If disk mirroring will not be installed, proceed with the Oracle Software Installation in Chapter 8.
Fault Tolerance with Disk Mirroring consists of having the contents of one disk replicated on another disk. Disk mirroring improves data availability. If one of the mirrored disks fails, the information can be accessed on the other disk. Solaris Volume Manager software is used to manage the mirroring.

Maintenance Tasks
After the setup tasks have been executed, the mirroring status is checked periodically by the
system. If any mirroring failure is detected, the root user will receive a failure notification via email.
Alternatively, the mirroring status can also be checked on user request. After failure detection it is
necessary to replace the damaged disks.
The maintenance tasks are:
• The mirror state monitoring as described in Section 6.2.1
• The replacement of damaged disks as described in Section 6.2.2
• The system boot with insufficient database replicas as described in Section 6.2.3

 Remove the SPOTS Performance Management V14.0 DVD.

 Insert the SPOTS Patches DVD.

 Install patch p140101-* (where * is the latest release version in the patch DVD).

 Remove the SPOTS Patches DVD and insert the SPOTS Performance Management V14.0
DVD.

Configuring Disk Mirroring

This section describes how to configure disk mirroring for internal disks, since disk mirroring for configurations that use the Sun StorEdge 3320 or StorageTek ST2540 is done in hardware.
There are the following pre-defined mirroring configurations:
1. Sun Fire V490 with two internal disks of 146 GB.
2. Sun Fire V445 with eight internal disks of 73 GB.
3. Sun Fire V440 with four internal disks of 73 GB except on Legacy Small B2.
4. Sun SPARC Enterprise M3000 with four internal disks of 146 GB.
5. Sun SPARC Enterprise M4000 with four internal disks of 146 GB.

The mirroring configuration for all configurations is described in Section 6.1.1.1.

6.1.1.1 Configuring System Disk Mirroring

This Section describes how to configure the System disk mirroring for:
• Sun SPARC Enterprise M3000
• Sun SPARC Enterprise M4000
• Sun Fire V490
• Sun Fire V445
• Sun Fire V440

 Login as root user


 This login must be done in the workstation and must be a root login (do not become super
user with the su command). In order to configure the Disk Mirroring it is best to send the host
to single user level.

 Issue the following command


# init 0
# boot -s
# <Enter the root password>
# svcadm enable -rst smserver
# vold &

 Detect and stop the processes that are using the following file systems, using the procedure in Section 6.2.7 (see the example after this list):
/replica1
/replica2
/replica3 (if applies)
/replica4 (if applies)
/replica5 (if applies)
/replica6 (if applies)
/replica7 (if applies)
/replica8 (if applies)
/var/opt
/var_opt_mirror
/export/home
/home_mirror
/opt_mirror
/swap_mirror
/root_mirror
/spots_rman (if applies)
/spots_rman_mirror (if applies)
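Section 6.2.7 contains the full procedure; as a minimal sketch, the Solaris fuser command can list the processes that are using a mount point and, if required, terminate them (shown here for /replica1 only):

# fuser -cu /replica1
# fuser -ck /replica1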

 Insert SPOTS Performance Management V14.0 Core DVD in the system’s DVD unit.

 Execute the following shell commands:


# cd /var/diskman/OSandBRmirror
# ./runstep 1
# reboot

 Wait for the system to reboot.

 Login as root user

 Insert SPOTS Performance Management V14.0 Core DVD in the system’s DVD unit.

 Execute the following shell commands:


# cd /var/diskman/OSandBRmirror
# ./runstep 2

 The alternate boot device is shown in the output of the last step. Note it down, filling in the corresponding item of the section “Information to fill during disk configuration with Solaris Volume Manager” in Annex 5. Make sure that all information required in this section is completely filled in.

 Wait for all the disks to be synchronized; the following command shows all metadevices that are currently being synchronized. Wait until none is being synchronized; once synchronization is complete the command will not return any results:
# metastat | grep -i resync

 After all disks become synchronized, issue the following command:


# reboot

 IMPORTANT NOTE: Perform the next step only if there are mirrors for the spots_db*
partitions, defined in the internal disks.

 To create the mirrors for the spots_db* partitions on the internal disks, execute:


 This doesn’t need to be executed on the Application Server in the Distributed Environment.
# cd /var/diskman/OSandBRmirror
# ./intDBmirror.ksh

 Verify the configuration as described in Sections 6.2.1.3 and 6.2.1.4.

 Use the output of the last step to fill in the corresponding items of section “Information to fill
after disk configuration with Solaris Volume Manager”, in Annex 5. Make sure that all
information required in this section is completely filled in.

 Disk configuration successfully completed

 IMPORTANT NOTE: now proceed to Chapter 7, SPOTS Configurations with External


Storage and perform the steps that are described there.


6.2 Maintenance Procedures

The maintenance procedures are the following:


• The mirror state monitoring, described in Section 6.2.1
This procedure is to be executed after the reception, by root user, of a Failure Notification
via Email, or when it is desired to check the mirroring configuration status.
• The replacement of damaged disks, described in Section 6.2.2
This procedure is to be executed after the detection of a mirroring disk failure
• The system boot with insufficient database replicas, described in Section 6.2.3
This procedure is to be executed when the system is unable to boot due to “insufficient
database replicas”.
• Section 6.2.7 describes how to detect and stop processes that are using a file system.

6.2.1 Monitoring Tasks

The monitoring of mirroring status can be done on user request and on system request.
Section 6.2.1.1 explains the Solaris Volume Manager concepts needed to understand the disk
mirroring.
Section 6.2.1.2 describes the system monitoring and how to configure it.
Sections 6.2.1.3 and 6.2.1.4 describe how to monitor the mirroring on user request.

6.2.1.1 Solaris Volume Manager Objects

The Solaris Volume Manager Objects are metadevices, state database replicas and hot spare
pools.
A metadevice is a name for a group of physical slices that appear to the system as a single, logical
device. Metadevices are actually pseudo, or virtual, devices in standard UNIX terms. They are used
to increase storage capacity and increase data availability. The metadevices are concatenations,
stripes, concatenated stripes, mirrors, RAID5 metadevices, and trans metadevices. SPOTS only
uses mirrors.
A state database is a database that stores information on disk about the state of your Solaris
Volume Manager configuration (records and tracks changes made to disk configuration). The
database is actually a collection of multiple, replicated database copies. Each copy, referred to as a
state database replica, ensures that the data in the database is always valid.
SPOTS does not use hot spare pools and so they are not described here.
Mirrors (or RAID 1) consist of at least two submirrors. The storage of data is duplicated in all
submirrors belonging to the same mirror. Submirrors are physical disk slices. Read performance is
improved since either slice can be read at the same time (if slices are in different disks). Write
performance is the same as for single disk storage. RAID 1 provides the best performance and the
best fault-tolerance in a multi-user system.
In Solaris Volume Manager, mirrors are made of one to three submirrors.
Mirrors and submirrors are metadevices.

6.2.1.2 Disk Failure Notification via Email

The mirroring status is checked periodically by the system. This check is done by the crontab job
/var/diskman/bin/dscheck.sh every 30 minutes.
If any problem is detected with the mirroring an email is issued to the root user with the status of all
Solaris Volume Manager Objects.
To change the default monitoring period or to stop the detection, edit the crontab as root user
issuing the command:
# crontab -e
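The installed entry is similar to the following line (the same line appears, commented out, in the disk replacement procedure in Section 6.2.2); adjusting the minute fields changes the 30-minute period:

0,30 * * * * /var/diskman/bin/dscheck.sh > /dev/null 2>&1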
Sections 6.2.1.3 and 6.2.1.4 explain the failure notification contents.

The job can also be executed on user request with the following command:
# /var/diskman/bin/dscheck.sh
and if a failure occurred then the root user will receive an email

To send failure notification to other users, create and edit the address list file
/var/diskman/addresslist.txt and include one email address per line.
Example: To send the email to the local user spots and the remote user smith then the address list
file contains the following:
spots
smith@faraway.net

 Only local users that already exist can receive the email. For example, only after the SPOTS
software installation the user spots can be added to the address list. Before the installation
the user spots does not exist.
 To send mails for remote users it is necessary, first, to configure the sendmail application.

6.2.1.3 Verifying the Status of State Database Replicas

 Login as root user

 Execute the following shell command


# metadb -i

 Inspect the output for problems.


The flags in front of the device name represent the device status. Uppercase letters
indicate a problem status. Lowercase letters indicate an “Okay” status.
Take note of all slices that have replicas with problems (ex: c1t10d0s1).
 IMPORTANT NOTE: If any of the database replicas shows a problem, then the metadevices
status must also be checked (if not yet checked). To check the metadevices status use the
procedure in Section 6.2.1.4. All the disks with problems must be replaced with the
procedure in Section 6.2.2.
Example of output with problems:
flags first blk block count
a m p luo 16 1034 /dev/dsk/c1t0d0s4
a p luo 16 1034 /dev/dsk/c1t8d0s4
a p luo 16 1034 /dev/dsk/c1t1d0s1
a p luo 16 1034 /dev/dsk/c1t9d0s1
a p luo 16 1034 /dev/dsk/c1t2d0s1
W p l 16 1034 /dev/dsk/c1t10d0s1
a p luo 16 1034 /dev/dsk/c1t3d0s1
W p l 16 1034 /dev/dsk/c1t11d0s1
W p l 16 1034 /dev/dsk/c1t4d0s1
a p luo 16 1034 /dev/dsk/c1t12d0s1
a p luo 16 1034 /dev/dsk/c1t5d0s3
a p luo 16 1034 /dev/dsk/c1t13d0s3

In the example, the slices c1t10d0s1, c1t11d0s1 and c1t4d0s1 have problems.

6.2.1.4 Verifying Status of Metadevices

 Login as root user

 Execute the following shell command


# metastat

 Inspect the output for problems.


If any mirror object shows the state “Needs maintenance” for one submirror, take note of the device
name (ex: c1t11d0s0).
If both submirrors of one mirror show the state “Needs maintenance” then the data has potentially
been corrupted. In this case you must replace the two disks and restore from the respective file
system backups.
 IMPORTANT NOTE: If any metadevice shows a problem, then the status of replicas must
also be checked (if not yet checked). To check the status of state database replicas use the
procedure in Section 6.2.1.3. All the disks with problems must be replaced with the
procedure in Section 6.2.2

Example of output with problems:


d60: Mirror
Submirror 0: d61
State: Okay
Submirror 1: d62
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 71101179 blocks

d61: Submirror of d60
State: Okay
Size: 71101179 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c1t3d0s0 0 No Okay

d62: Submirror of d60
State: Needs maintenance
Invoke: metareplace d60 c1t11d0s0 <new device>
Size: 71101179 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c1t11d0s0 0 No Maintenance

In the example, the slice c1t11d0s0 has problems.

6.2.2 Replacing mirroring disks

This section describes how to replace mirrored disks. It must be executed after a mirrored disk
failure has been detected.

 Login as root user


 This login must be done in the workstation and must be a root login (do not become super
user with the su command).

 Execute the command below to edit the crontab:


# crontab -e

 Stop the email notification, by placing a comment character “#” in the beginning of the line
containing the call to the dscheck.sh script, as shown below
# 0,30 * * * * /var/diskman/bin/dscheck.sh > /dev/null 2>&1

 Save and exit the crontab file.

 Obtain the disk ids of disks with problems on state database replicas as described in Section
6.2.1.3.

 Obtain the disk ids of disks with problems on metadevices as described in Section 6.2.1.4.
 If this procedure is being executed because the system could not boot (Section 6.2.3), then
add the disk ids found in that section.

 Delete all the database replicas contained in the disks with database replica problems and in
the disks with metadevice problems.
See Annex 5 to obtain slice ids of database replicas contained in disks with problems.
To delete a state database replica refer to Section 6.2.4.2.

 Delete all submirrors contained in all disks with problems in database replicas and in all disks
with problems in metadevices.
See Annex 5 to obtain mirror and submirror ids of submirror slices contained in disks with
problems.
To delete a submirror refer to Section 6.2.4.4

 For a Sun Blade host (non hot-pluggable), shut down the machine with the command:
# reboot


 A Sun Fire host is hot-pluggable, so disks may be replaced without shutting down the
system.

 Identify the disks to remove using the information filled in Annex 5

 Open the machine and remove the damaged disks

 Take note of the serial number (SN) of the new disks and update this information in
Annex 5.

 Insert the new disks in the slots chosen in the last step and close the machine.
 For a Sun Fire host (hot-pluggable) it is necessary to wait at least 1 minute after disk removal
before inserting the new disks.

 If the server is switched off, press the power button, wait for the login window and login as
root user

 Format the new disks using the disk partition information on Annex 5 with the procedure
described in Section 6.2.5.

 Create a file system for each slice on the new disks using the information on Annex 5 with
the procedure described in Section 6.2.6.

 Create all database replicas to be contained in the new disks.


See Annex 5 to obtain the slice ids of database replicas contained in new disks.
To create a state database replica refer to Section 6.2.4.1
 The database replicas to be created are the same as those deleted from the damaged disks

 Create all submirrors to be contained in the new disks.


Refer to Annex 5 to obtain mirror ids, submirror ids and slice ids of the slices contained in
the new disks.
To create a submirror, refer to Section 6.2.4.3.
 The submirrors to be created are the same as those deleted from the damaged disks

 Execute the command below to edit the crontab:


# crontab -e

 Re-activate the email notification mechanism by removing the comment character “#” from the
beginning of the line containing the dscheck.sh script, as shown below:
0,30 * * * * /var/diskman/bin/dscheck.sh > /dev/null 2>&1

 Save and exit the crontab file.

6.2.3 Booting system with insufficient database replicas

This section describes how to boot the system when the message “Insufficient metadevice
database replicas located” is reported at boot time.

This situation means that one or more disks are damaged and that the system cannot boot without
maintenance.
To boot the system execute the following steps:

 Enter maintenance mode, typing the root password.

 Take note of the damaged replicas executing the procedure in Section 6.2.1.3.

 For each damaged database replica, execute the procedure in Section 6.2.4.2 ignoring the
possible “read-only” messages that can appear during the procedure execution.

 Verify that there are no more damaged database replicas by executing the procedure in
Section 6.2.1.3, and take note of the damaged disks.

 Reboot system with the following command:


# reboot

 Login as root user

 Replace the damaged disks by executing the procedure in Section 6.2.2, adding the damaged
disk ids found with this procedure to the ones obtained with the procedure of Section 6.2.2.

6.2.4 Creating and Deleting Solaris Volume Manager Objects

This section describes how to create and delete Solaris Volume Manager objects. The procedures
in this section are to be executed as part of other maintenance tasks.

6.2.4.1 Creating a Solaris Volume Manager State Database Replica

 IMPORTANT NOTE: Before creating a state database replica, the filesystem must be
created (with the newfs command) and the partitions must be permanently unmounted (using
the umount command and removing the mountpoint from the /etc/vfstab file). When creating the
replicas using the procedure for disk replacement (Section 6.2.2) the partitions are already
unmounted.

 Login as root user

 Execute the command below:


# metadb -a <slice>
Where <slice> must be replaced by the slice id.
Example:
# metadb -a /dev/dsk/c0t2d0s0

 Verify the database replica creation by examining the output of the command
# metadb -i
In the output, the flag “a” (meaning “active”) must be present on the created replica as in the
example below


flags first blk block count
...
a u 16 1034 /dev/dsk/c0t2d0s0

6.2.4.2 Removing a Solaris Volume Manager State Database Replica

 Login as root user

 Execute the command below:


# metadb -d <slice>
where <slice> must be replaced by the slice id.
Example:
# metadb -d /dev/dsk/c0t2d0s0

 Verify the database replica deletion by examining the output of command


# metadb -i

6.2.4.3 Creating a Solaris Volume Manager submirror

 Login as root user

 Execute the commands below:


# metainit <submirror> 1 1 <slice>
# metattach <mirror> <submirror>
where <mirror>, <submirror> and <slice> must be replaced by the corresponding ids.
Example:
# metainit d72 1 1 c1t8d0s7
# metattach d70 d72

 After attaching the submirror to the mirror, the newly attached submirror starts the
synchronization process with the other submirror. The synchronization time depends on the
slice size and on the system hardware and can take hours. Only when the two submirrors are
synchronized can the mirror assure data redundancy.
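 The resynchronization progress can be followed with the metastat command (illustrative
output only; the percentage shown depends on the system):
# metastat d70
d70: Mirror
Submirror 0: d71
State: Okay
Submirror 1: d72
State: Resyncing
Resync in progress: 15 % done
...
The mirror provides data redundancy again only when both submirrors show the state “Okay”.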

6.2.4.4 Removing a Solaris Volume Manager submirror

 IMPORTANT NOTE: The capability for data redundancy is lost while the mirror is a one-way
mirror.

 Login as root user

 Execute the commands below:


# metadetach -f <mirror> <submirror>
# metaclear <submirror>
where <mirror> and <submirror> must be replaced by the corresponding metadevice ids.

Example:
# metadetach -f d70 d72
# metaclear d72

6.2.4.5 Removing a Solaris Volume Manager Mirror and Submirrors

 To remove mirroring from the root (/), swap or /opt filesystems (filesystems that cannot be
unmounted) execute the procedure of Section 6.2.4.6 instead.
 Consult the information in Annex 5 and obtain the mirror id, the submirror ids and the
mountpoints.
 Verify that a current backup of the metadevice exists. Operation errors may cause data loss.

 Login as root user


 This login must be done on the workstation and must be a root login (do not become super
user with the su command).
 Only the root user can be logged in to the machine (log out all remote users).

 Stop SPOTS as described in Section 4.1.

 Stop all access to the metadevice. Verify and stop metadevice access with the procedure in
Section 6.2.7.

 Confirm the mirror and submirror ids issuing the command


# metastat <metadevice>
Where <metadevice> is the mirror id
Example:
# metastat d70

 Unmount the filesystem that resides on the mirror to be removed, using the umount command.
Example: to remove the /export/home filesystem
# umount /export/home

 Delete the submirror that does not contain the word “mirror” in the mountpoint field of disk
table on Annex 5 (example: /export/home) using the commands below:
# metadetach -f <mirror> <submirror>
# metaclear <submirror>
# metaclear <mirror>
where <mirror> and <submirror> must be replaced by the corresponding metadevice ids.
Example:
# metadetach -f d70 d71
# metaclear d71
# metaclear d70

 Edit the /etc/vfstab file and perform the following actions:


1. Uncomment the line (remove the “#” character from the beginning of the line) containing the
mountpoint and the slice.
2. Delete the line containing the metadevice and the mountpoint.
Example: for /export/home mirror, the line:
#/dev/dsk/c1t0d0s4 /dev/rdsk/c1t0d0s4 /export/home ufs 2 yes -
Should be changed to:
/dev/dsk/c1t0d0s4 /dev/rdsk/c1t0d0s4 /export/home ufs 2 yes -
And the lines below should be deleted:
# home mirror
/dev/md/dsk/d70 /dev/md/rdsk/d70 /export/home ufs 2 yes -

 Mount the filesystem with the command mount


Example: for /export/home filesystem use the command:
# mount /export/home

 Delete the second submirror, the one containing the word “mirror” in the mountpoint field of
disk table on Annex 5 (example: /home_mirror), using the commands below
# metaclear <submirror>
Where <submirror> must be replaced by the corresponding metadevice id.
Example:
# metaclear d72

 Create the mountpoint directory, for this last submirror, using the command below and the
information on Annex 5.
# mkdir <mountpoint>
where <mountpoint> is the mountpoint directory
Example: if the mountpoint is /home_mirror then issue the command:
# mkdir /home_mirror

 Edit the /etc/vfstab file and uncomment the line (remove the “#” character from the beginning
of the line) that contains the mountpoint and the slice.
Example: for /home_mirror partition, the line:
#/dev/dsk/c1t1d0s4 /dev/rdsk/c1t1d0s4 /home_mirror ufs 2 yes -
Should be changed to:
/dev/dsk/c1t1d0s4 /dev/rdsk/c1t1d0s4 /home_mirror ufs 2 yes -

 Mount the filesystem with the command mount


# mount <filesystem>
Where <filesystem> is the filesystem to mount
Example: for /home_mirror filesystem use the command:

# mount /home_mirror

6.2.4.6 Unmirroring a File System That Cannot Be Unmounted

 This procedure is only for filesystems that cannot be unmounted, such as root (/), swap or
/opt. For other filesystems execute the procedure of Section 6.2.4.5, instead.
 Consult the information in Annex 5 and obtain the mirror id, the submirror ids and the
mountpoints.
 Verify that a current backup of the metadevice exists. Operation errors may cause data loss.

 Login as root user


 This login must be done in the workstation and must be a root login (do not become super
user with the su command).

 Confirm the mirror and submirror ids issuing the command


# metastat <metadevice>
Where <metadevice> is the mirror id
Example:
# metastat d100

 Remove the submirror that does not contain the word “mirror” in the mountpoint field of disk
table on Annex 5 (example: /opt) from the mirror using the command below:
# metadetach -f <mirror> <submirror>

where <mirror> and <submirror> must be replaced by the corresponding metadevice ids.
Example:
# metadetach -f d100 d101

 For the swap or /opt filesystems, edit the /etc/vfstab file and perform the following
actions:
1. Uncomment the line (remove the “#” character from the beginning of the line) that
contains the mountpoint and the slice.
2. Delete the line containing the metadevice and the mountpoint.
Example: for swap mirror, the lines:
#/dev/dsk/c1t0d0s1 - - swap - no -
# swap mirror
/dev/md/dsk/d90 - - swap - no -

Should be changed to:


/dev/dsk/c1t0d0s1 - - swap - no -

 For root (/) filesystem execute the command:


# metaroot /dev/dsk/<slice>
where <slice> is the slice id of the root filesystem (/) in Annex 5.


Example:
# metaroot /dev/dsk/c0t3d0s0

 Reboot the system with command:


# reboot

 Login as root user

 Remove the mirror executing command:


# metaclear <metadevice>
Where <metadevice> is the mirror id
Example:
# metaclear d100

 Delete the two submirrors using the commands below


# metaclear <submirror1>
# metaclear <submirror2>

Where <submirror1> and <submirror2> must be replaced by the corresponding metadevice ids.
Example:
# metaclear d101
# metaclear d102

 Create the mountpoint directory for the submirror that has the word “mirror”, using the command
below and the information in Annex 5.
# mkdir <mountpoint>
where <mountpoint> must be replaced by the mountpoint name
Example: if the mount point is /root_mirror then issue the command:
# mkdir /root_mirror

 Edit the /etc/vfstab file and uncomment the line (remove the “#” character from the beginning
of the line) that contains the mountpoint and the slice.
Example: for /root_mirror partition, the line:
#/dev/dsk/c1t1d0s6 /dev/rdsk/c1t1d0s6 /root_mirror ufs 2 yes -
Should be changed to:
/dev/dsk/c1t1d0s6 /dev/rdsk/c1t1d0s6 /root_mirror ufs 2 yes -

 Mount the filesystem with the command mount


# mount <mountpoint>
where <mountpoint> must be replaced by the mountpoint name
Example: for /root_mirror filesystem use the command:

# mount /root_mirror

6.2.5 Formatting Disks

This procedure is to be executed as part of the disk replacement procedure. Do not execute it as a
generic format procedure.
 To format a disk, the information saved in Annex 5 is needed in order to reproduce the
slices of the damaged disk on the new one.
 It is not necessary to add new information to Annex 5 when formatting a new disk.

 Login as root user

 Start the format utility


# format

 A list of available disks is displayed.

 Compare the disk geometry of the disk to format with the original one (refer to Annex 5).

 Enter the number of the disk to repartition from the list displayed.
Specify disk (enter its number): <disk-number>

 <disk-number> is the number of the disk to repartition.


 When replacing a disk, the disk id (ex: c1t3d0) of the new disk is the same as that of the old disk

 Go into the partition menu (which allows you to set up the slices).


format> partition

 Start the modification process.


partition> modify

 Set the disk to all free hog.


Choose base (enter number) [0]? 1

 Create a new partition table by answering y when prompted to continue.


Do you wish to continue creating a new partition table based on above table[yes]? y

 Identify the free hog partition (slice).


 The free hog partition must be one of the slices containing a Solaris Volume Manager state
database replica (a slice that contains a /replica mount point).

 Enter the slice sizes when prompted (ignore the slice tag and flag values).
 If the new disk has the same geometry as the original one, the slice sizes must be entered
in cylinders (ex: 1234c). If it does not have the same geometry, enter the size of each
partition in megabytes, adding 3 megabytes to the size of the original partition. To know the
new disk geometry, execute the format command. The damaged disk geometry is stored in
Annex 5.

 The new partition table is now displayed.

 Make the displayed partition table the current partition table by answering “y” when asked.
Okay to make this the current partition table[yes]? y
To change the current partition table, answer no and go back to start the modification
process.

 Name the partition table.


Enter table name (remember quotes): "<partition-name>"
<partition-name> is any name you choose for the new partition table. Example: “disk6”.

 Label the disk with the new partition table when you have finished allocating slices on the
new disk.
Ready to label disk, continue? yes

 Quit the partition menu.


partition> q

 To partition more disks issue the command below, and then return to the “Go into the partition
menu” step.
format> disk

 Quit the format menu.


format> q
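 As an optional cross-check (not part of the original procedure), the new partition table can be
compared with the values recorded in Annex 5 using the prtvtoc command, for example:
# prtvtoc /dev/rdsk/c1t3d0s2
Slice 2 conventionally represents the whole disk, so the output lists all slices of that disk.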

6.2.6 Creating File Systems

This procedure is to be executed as part of the disk replacement procedure. Do not execute it as a
generic procedure to create filesystems.
 To create a new filesystem, the information saved in Annex 5 must be used.

 Login as root user

 Create a file system for each slice with the newfs command.
# newfs /dev/rdsk/<cwtxdysz>
where <cwtxdysz> is the raw device for the file system to be created.
Answer y when prompted.
 Ignore the unallocated cylinders warning.
Example: To create a filesystem for slice c1t8d0s7 then use the command:
# newfs /dev/rdsk/c1t8d0s7

newfs: /dev/rdsk/c1t8d0s7 last mounted as /export/home
newfs: construct a new file system /dev/rdsk/c1t8d0s7: (y/n)? y
/dev/rdsk/c1t8d0s7: 2088746 sectors in 723 cylinders of 27 tracks, 107 sectors
1019.9MB in 46 cyl groups (16 c/g, 22.57MB/g, 10816 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 46368, 92704, 139040, 185376, 231712, 278048, 324384, 370720, 417056,
463392, 509728, 556064, 602400, 648736, 695072, 741408, 787744, 834080,
880416, 926752, 973088, 1019424, 1065760, 1112096, 1158432, 1204768, 1251104,
1297440, 1343776, 1390112, 1436448, 1479200, 1525536, 1571872, 1618208, 1664544,
1710880, 1757216, 1803552, 1849888, 1896224, 1942560, 1988896, 2035232, 2081568


6.2.7 Detect and terminate processes that are using a filesystem

This procedure is to be executed as part of the mirroring configuration and disk replacement
procedures. Do not execute it as a generic procedure.
To detect the processes that are using a filesystem do the following:

 Execute the command below:


# cat /etc/vfstab
Output example:
# cat /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr ufs 1 yes -
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c1t1d0s1 - - swap - no -
/dev/dsk/c1t1d0s3 /dev/rdsk/c1t1d0s3 / ufs 1 no -
/dev/dsk/c1t1d0s7 /dev/rdsk/c1t1d0s7 /export/home ufs 2 yes -
/dev/dsk/c1t2d0s6 /dev/rdsk/c1t2d0s6 /home_mirror ufs 2 yes -
/dev/dsk/c1t1d0s4 /dev/rdsk/c1t1d0s4 /opt ufs 2 yes -
/dev/dsk/c1t2d0s4 /dev/rdsk/c1t2d0s4 /opt_mirror ufs 2 yes -
/dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 /replica1 ufs 2 yes -
/dev/dsk/c1t1d0s6 /dev/rdsk/c1t1d0s6 /replica2 ufs 2 yes -
/dev/dsk/c1t2d0s1 /dev/rdsk/c1t2d0s1 /replica3 ufs 2 yes -
/dev/dsk/c1t2d0s7 /dev/rdsk/c1t2d0s7 /replica4 ufs 2 yes -
/dev/dsk/c1t2d0s3 /dev/rdsk/c1t2d0s3 /root_mirror ufs 2 yes -
/dev/dsk/c1t1d0s5 /dev/rdsk/c1t1d0s5 /var/opt ufs 2 yes -
/dev/dsk/c1t2d0s5 /dev/rdsk/c1t2d0s5 /var_opt_mirror ufs 2 yes -
/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /swap_mirror ufs 2 yes -
swap - /tmp tmpfs - yes -
/dev/dsk/c2t1d0s1 /dev/rdsk/c2t1d0s1 /spots_db1 ufs 2 yes -
/dev/dsk/c2t4d0s0 /dev/rdsk/c2t4d0s0 /spots_db2 ufs 2 yes -
/dev/dsk/c2t2d0s0 /dev/rdsk/c2t2d0s0 /spots_db3 ufs 2 yes -
/dev/dsk/c2t5d0s0 /dev/rdsk/c2t5d0s0 /spots_db4 ufs 2 yes -
/dev/dsk/c2t3d0s0 /dev/rdsk/c2t3d0s0 /spots_db5 ufs 2 yes -
/dev/dsk/c2t6d0s0 /dev/rdsk/c2t6d0s0 /spots_db6 ufs 2 yes -

 The output of the last step contains devices and mountpoints. Take note of the filesystem
device (the value in the first column) for the filesystem mountpoint (the value in the third
column) that is to be checked.
Example:
To check the usage of filesystem /export/home then, in the output of the example above, take
note of the value /dev/dsk/c1t1d0s7

 Execute the command below:


# fuser <device>
Where <device> is the filesystem device to check.

Output example:
# fuser /dev/dsk/c1t0d0s7
/dev/dsk/c1t0d0s7: 614o

 The output of the last command shows the list of process identifiers (each one
followed by a letter) that are using the filesystem. If no process identifier is shown, then no
process is using the filesystem and the detection is finished. If the list is not empty then
continue the detection.

 For each process identifier obtained execute the steps below:

 Execute the following command to know the process name:


# ps -e -o pid,comm | grep <process-id>
Where <process-id> is the process identifier.
Output example:
# ps -e -o pid,comm | grep 614
1221 grep 614
614 rpc.ttdbserverd

 The output of the last step shows the process identifier and the process name of the
service to be stopped. If there is more than one line of output, then the correct line is
the one that does not contain the grep command.
 If you do not know how to stop the service, execute the command
# kill <process-id>
Where <process-id> is the process identifier.
Then verify that the process was terminated by executing the last step again. If the
output still shows the process, then execute the command:
# kill -9 <process-id>
Where <process-id> is the process identifier.

 Verify that all processes terminated successfully by executing the command below:
# fuser <device>
Where <device> is the filesystem device to check.
If the output is not empty, then execute the whole procedure again (the procedure in
Section 6.2.7).


7 SPOTS Configurations with External Storage

 IMPORTANT NOTE: this section only applies to the following hardware configurations:
• Medium A (on SUN-Fire-V445)
• Medium B (on SUN-Fire-V490)
• Medium C (on SUN SPARC Enterprise M3000 Server)
• Medium D (on SUN SPARC Enterprise M3000 Server)
• Medium Legacy
• Large A (on SUN-Fire-V445)
• Large B (on SUN-Fire-V490)
• Large C (on SUN SPARC Enterprise M3000 Server)
• Large D (on SUN SPARC Enterprise M3000 Server)
• Large Legacy
as described in section 2.3-Platform Hardware & Standard Software.

 IMPORTANT NOTE: before proceeding with the configuration of the Sun external disk
arrays, you must first complete the steps described in chapter 6 - Fault Tolerance with
Disk Mirroring. Return to this chapter only when the steps described in chapter 6 - Fault
Tolerance with Disk Mirroring are completed.

This sub-chapter describes the installation procedure of the Sun external disk arrays for the
SPOTS configurations listed above. You should follow these main steps:

 Physical Connections (Step 1)

 Installing External Array Software (Step 2)

 External Storage Configuration and Hard Disk Partitioning (Step 3)

7.1 Physical Connections (step 1)

7.1.1 Medium A/B/C/D and Medium B1 Legacy Configuration – Single Server

This configuration applies to:


• Medium A/B: SUN Fire V445/V490 - Single Server + 1 StorEdge 3320 array
• Medium B1 Legacy: SUN Fire V440 - Single Server + 1 StorEdge 3320 array
• Medium C: Sun SPARC Enterprise M3000 Server – Single Server + 1 StorageTek St2540
Array
• Medium D: Sun SPARC Enterprise M3000 Server – Single Server + 1 StorageTek St2540
Array + 1 StorageTek St2501 Array Expansion Kit

For details on Medium configuration, consult section 2.3.1.2 - Single Server Environment and
Figure 1, Small and Medium Configurations, Single Server Environment

 Throughout this section Host Server or Host must be interpreted as the Sun Host Connecting
to the external array (Single Server).

 The cable configuration used is a RAID dual-bus configuration (for performance and
reliability reasons).

In order to proceed, the SCSI jumper cable (small SCSI/LVD cable that comes with the StorEdge
3320) must connect the Channel 2 (CH2) to the Dual Bus Buff Conf Output, as the following figure
shows:


Figure 8, Cable Configuration for StorEdge 3320

Medium A/B or Medium B1 Legacy - For completing the procedure:


Connect the Sun StorEdge 3320 (using Channel 1 (CH1)) port to the first port on the Sun StorEdge
PCI Dual Ultra3 SCSI Host Adapter on the Host using a SCSI/LVD cable.
Connect the Sun StorEdge 3320 (using Channel 3 (CH3)) port to the second port on the Sun
StorEdge PCI Dual Ultra3 SCSI Host Adapter on the Host using a SCSI/LVD cable.

It is advisable to connect the StorEdge 3320 to the Sun Fire host using the supplied serial cable
(this allows problems to be diagnosed and resolved with a better response time).

Medium C - For completing the procedure (in case the external array is one StorageTek
ST2540):
Connect ports 2 and 3 of the first controller of the Sun StorageTek ST2540 to the first available
free port of the Fibre Channel card on the M3000 host using a Fibre Channel cable, as depicted
by the red lines in the figure below.

Connect ports 2 and 3 of the second controller of the Sun StorageTek ST2540 to the second
available free port of the Fibre Channel card on the M3000 host using a Fibre Channel cable, as
depicted by the red lines in the figure below.

Connect the Ethernet port of one of the RAID controllers to the same network to which the 3rd
Ethernet interface of the Sun SPARC Enterprise M3000 Server is connected, as depicted by the
blue line in the figure below.

Connect the Ethernet network interface of one of the RAID controllers to the 3rd Ethernet network
interface of the Sun SPARC Enterprise M3000 Server.

The figure below depicts the Medium C configuration on the M3000.

Figure 9, StorageTek ST2540 Medium C, configuration

Medium D - For completing the procedure (in case the external arrays are one StorageTek
ST2540 and one StorageTek ST2501):
Follow the procedure titled “For completing the procedure (in case the external array is a
StorageTek ST2540 and a JBOD):” depicted in Figure 11. In configuration Medium D,
instead of connecting a JBOD to the StorageTek ST2540, you have to connect a StorageTek
ST2501.


7.1.2 Large and Large B1 Legacy Configuration – DB Server

This configuration applies to:


• Large A/B: SUN Fire V445/V490 - Distributed Server + 2 StorEdge 3320 arrays
• Large B1 Legacy: SUN Fire V440 - Distributed Server + 2 StorEdge 3320 arrays
• Large C: SUN M3000 - Distributed Server + 1 StorageTek St2540 Array + JBOD
• Large D: Sun SPARC Enterprise M3000 Server – Distributed Server + 1 StorageTek St2540
Array + 1 StorageTek St2501 Array Expansion Kit

These configurations have a StorEdge 3320 array (Master) and a StorEdge 3320 array (Slave or
JBOD – Just a Bunch of Disks).
For details on Large configuration consult section 2.3.1.3 - Distributed Environment Large and
Figure 2, Large Configuration, Distributed Environment.

 Throughout this section Host Server or Host must be interpreted as the Sun Fire V445 ( DB
Server ).

The cable configuration used is a RAID dual-bus configuration.

For the cable configuration of the StorEdge 3320 (Master), proceed as described in
section 7.1.1 - Medium A/B/C/D and Medium B1 Legacy Configuration – Single Server.
For the cable configuration of the JBOD (Just a Bunch of Disks) it is necessary to connect a
SCSI/LVD cable from Channel 1 Port B on the JBOD to the Channel 0 port on the StorEdge 3320
(Master) device, and another SCSI/LVD cable between Channel 2 (on the JBOD) and the SINGL BUS
CONF port on the Sun StorEdge 3320 (Master), as the following figure shows:

Figure 10, Cable Configuration for StorEdge 3320 (Master) with JBOD

It is advisable to connect the StorEdge 3320 (Master) to the Sun Fire host using the supplied
serial cable (this allows problems to be diagnosed and resolved with a better response time).

Large C - For completing the procedure (in case the external array is a StorageTek ST2540
and a JBOD):
Connect ports 2 and 3 of the first controller of the Sun StorageTek ST2540 to the first available
free port of the Fibre Channel card on the M3000 host using a Fibre Channel cable, as depicted
by the red lines in the figure below.

Connect ports 2 and 3 of the second controller of the Sun StorageTek ST2540 to the second
available free port of the Fibre Channel card on the M3000 host using a Fibre Channel cable, as
depicted by the red lines in the figure below.

Connect the Ethernet port of one of the RAID controllers to the same network to which the 3rd
Ethernet interface of the Sun SPARC Enterprise M3000 Server is connected, as depicted by the
blue line in the figure below.

Connect the SAS expansion ports of the Sun StorageTek ST2540 to the JBOD SAS ports, as
depicted by the green lines in the figure below.

The figure below depicts the Large C configuration on the M3000.


Figure 11, StorageTek ST2540 with JBOD, Large C configuration

Large D - For completing the procedure (in case the external arrays are one StorageTek
ST2540 and one StorageTek ST2501):
Follow the procedure titled “For completing the procedure (in case the external array is a
StorageTek ST2540 and a JBOD):” depicted in Figure 11. In configuration Large D,
instead of connecting a JBOD to the StorageTek ST2540, you have to connect a StorageTek
ST2501.

7.2 Installing External Array Software (step 2)

7.2.1 Sun StorEdge 3320 SCSI Array Software

 Insert the SPOTS Performance Management V14.0 Core DVD.

 Execute the following script to install and configure the SUN StorEdge 3320
software:
# /cdrom/cdrom0/storedge/install_3320.sh

 Change the password for the following users:


# passwd ssmon
# passwd ssadmin
# passwd ssconfig

7.2.2 Sun StorageTek ST2540 Common Array Software (CAM)

To install the StorageTek ST2540 CAM follow the instructions below. Only the steps for a
basic configuration are presented here.

 Activate the multipathing for all Fibre Channel ports on the host. This operation will
require a reboot. Accept the default option and press Return.

# stmsboot -e -D fp

 To have an out-of-band connection between the local management host and the
array controllers, both management host and array controller must have Ethernet
interfaces with IP addresses of the same network. Sun array controllers are
shipped with the following default IP addresses:

 Ethernet port 1 of Controller A is assigned IP address 192.168.128.101

 Ethernet port 1 of Controller B is assigned IP address 192.168.128.102


The IP addresses of Controller A and Controller B can be changed, if needed, in
the CAM. They can also be assigned dynamically using the DHCP protocol or
through a serial port connection. These procedures are not described here. You
can consult the “Sun StorageTek 2500 Series Array Hardware Installation Guide”
and the “Sun StorageTek Common Array Manager Software Installation Guide”.
The only thing needed in order to have a connection between the array controllers
and the host is to configure at least one of the Ethernet interfaces of the host with an
IP address belonging to the sub-network 192.168.128.xxx. This is done as shown
below:

# ifconfig bge2 plumb
# ifconfig bge2 192.168.128.11 netmask 255.255.255.0 up
# echo "192.168.128.11" > /etc/hostname.bge2
# printf "192.168.128.0\t255.255.255.0\n" >> /etc/netmasks
# route add -net 192.168.128.0 -netmask 255.255.255.0 192.168.128.11 1
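 As a quick optional check (assuming the default controller IP addresses listed above), the
out-of-band connectivity to the array controllers can be verified with ping:
# ping 192.168.128.101
192.168.128.101 is alive
# ping 192.168.128.102
192.168.128.102 is alive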


 Insert the CDROM that comes with the ST2540 in the appropriate device.

 Proceed now with the installation of the CAM software (the currently recommended
version of the CAM is 6.2.0.13 or later). Run the following commands (an SSH
session with X11 display forwarding will be needed). Notice that the file
host_sw_solaris_6.2.0.13.tar.gz may be located elsewhere on the CDROM.

 Check which CAM version is provided on the CDROM. If it is not version 6.2.0.13 or later, get
it from
http://www.sun.com/storage/management_software/resource_management/cam/get_it.jsp.

 The documentation is also available at http://docs.sun.com/app/docs/prod/stor.arrmgr#hic.

# cp /cdrom/cdrom0/host_sw_solaris_6.2.0.13.tar.gz /var
# cd /var
# gunzip host_sw_solaris_6.2.0.13.tar.gz
# tar xvf host_sw_solaris_6.2.0.13.tar
# cd HostSoftwareCD_6.2.0.13
# ./RunMe.bin

Figure 12, CAM welcome screen

Figure 13, CAM License Agreement

Figure 14, CAM installation type


Figure 15, CAM installation review

Figure 16, CAM installation finished with success

 Before the first access to the CAM software, run the commands below.

# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true


# smcwebserver restart

 Access the CAM software using a browser and loading the URL
https://cam-management-host:6789. Please notice:

 Replace the cam-management-host in the URL above with the IP address of the
management host.

 Access to port 6789 must be allowed. Firewall rules might need changes.

 In the next image you can see the authentication web page that allows you to
access the CAM. Type the user name and password of the account used to install
the CAM software.

Figure 17, CAM authentication web page


Figure 18, CAM first login

 Select the option “Sun StorageTek Common Array Manager”. The following form
will be presented. Fill in the form and select “Save and Continue Setup”.

Figure 19, CAM site information form


Figure 20, CAM site information form saved successfully

 You can now proceed to the registration of the arrays and skip the “Auto Service
Request (ASR) Setup”. As can be seen in the image below, there is an error
message stating that the ASR registration failed. Ignore this message and select
the option “Register”.

Figure 21, CAM Storage System Summary

 Accept all options as shown in the image below and select the option “Next”. The
auto discovery starts searching the local network for Storage Systems.

Figure 22, CAM Registering the Storage System


Figure 23, CAM Auto Discovery of Storage Systems

 The result of the scan is presented with some details about the Storage System
found. Select “Finish”.

Figure 24, CAM List of the Storage Systems Discovery

 The Storage System registration starts and its status is displayed. When this
process is completed, select “Close”.


Figure 25, CAM status of the Storage Systems registration

Figure 26, CAM Storage Systems summary

 The last step is to ensure that the firmware version of the Sun StorageTek
ST2540 is 07.35.10.10 or later. If it is not, it must be upgraded. Check the current
version of the firmware in the CAM page that shows all the available Storage
Systems. You can find it by selecting “Storage Systems”, and all storage systems
available will be displayed. Check the image above.
If the firmware version of the Sun StorageTek ST2540 is older than 07.35.10.10,
select the check box of the storage array whose firmware you want to upgrade and
select “Install Firmware Baseline”. Go through all the steps accepting the defaults.
Make sure that in the last step there is no error message presented.


Figure 27, CAM Storage Systems firmware upgrade

7.3 External Storage Configuration and Hard Disk Partitioning (step 3)

 Login as root user.

 Verify the configuration as described in Sections 6.2.1.3 and 6.2.1.4.

 Use the output of the last step to fill in the corresponding items of section
“Information to fill after disk configuration with Solaris Volume Manager”, in Annex
5. Make sure that all information required in this section is completely filled in.

 Configure the remaining SPOTS database partitions that are managed by hardware, by
executing the following step for configurations:
o Medium
 Go to Annex 7 and perform the steps described there in order to configure the
SPOTS database partitions that are managed by hardware.
o Large
 Go to Annex 8 and perform the steps described there in order to configure the
SPOTS database partitions that are managed by hardware.
o Medium Legacy
 Go to Annex 9 and perform the steps described there in order to configure the
SPOTS database partitions that are managed by hardware.
o Large Legacy
 Go to Annex 10 and perform the steps described there in order to configure the
SPOTS database partitions that are managed by hardware.


8 Installing Oracle Software

 Login as root user.

 To install Oracle 10g Enterprise Edition, please insert the “Oracle Installation
Packages” Media in the DVD drive and run the script install.sh as root user.

# cd /cdrom/cdrom0/10.2.0.3_EE/
# sh install.sh

The script automatically prepares the system for an Oracle installation (updating kernel
parameters, creating the oracle user and groups) and finally installs Oracle 10g as UNIX packages
with the pkgadd command, so no user interaction is required.

 Reboot the system by executing the command:


# /etc/shutdown -y -i6 -g0

 After the reboot, verify if the listener process is running and, if it is, kill it.
Execute the following commands:
# ps -ef | grep LISTENER

 The output should be as in the following example:


oracle 651 1 0 12:28:48 ? 0:02
/opt/oracle/product/10.2.0/db_1/bin/tnslsnr LISTENER -inherit

 Get the process PID (in the example 651) and kill the listener process, as depicted
in the following example:
# kill 651
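 As an illustrative alternative to the manual steps above (a sketch, not part of the original
procedure), the listener PID can be obtained and killed in a single command line:
# kill `ps -ef | grep tnslsnr | grep -v grep | awk '{print $2}'`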

 Continue to the next step in the installation process.

8.1 Removing Oracle Software
De-installing Oracle10g can be achieved with the pkgrm command.

 Login as root user.

 Execute the following command to remove Oracle from the system:

# pkgrm ORA10EE
(…)
(…)
# rm -r /opt/oracle
# rm -r /var/opt/oracle


9 Installing SPOTS Software (Solaris environment)

9.1 Structure of Installation Procedure


This chapter deals with the installation of SPOTS PMS (including RTA) and PMC components on
Solaris environments. For the installation of SPOTS PMC on Windows environments, refer to
Chapter 10.
 Before proceeding, make sure you have read Chapter 2 which provides an overview of
SPOTS PMS and PMC components and the associated deployment options.
Specific recommendations concerning NIS are presented in Section 9.2.
Indications on how to choose the set of packages to be installed on each host (SPOTS
Performance Management V14.0 Core), and their sequence of installation, are presented in
Section 9.3.
Section 9.4 describes the front-end available for installing the SPOTS packages.
The remaining sections of the chapter (9.5 - System configuration issues, 9.6 - SPOTS
Licensing Software, 9.7 - Real-Time Configuration issues) present configuration actions that
must be performed (in this order) after installing the SPOTS SW.

 Once all SPOTS components are installed refer to Annex 2 for configuring the SPOTS
domains and to Annex 3 for verifying and adjusting the SPOTS server properties.

9.2 NIS / NIS+ or LDAP Users and Groups Requirements
If UNIX users and groups are managed with NIS/NIS+, the SPOTS installation process leaves the
creation of the necessary user spots and groups (pmadmin and pmuser) in the hands of the system
administrator. However, some requirements must be ensured:
Use the following attributes for user “spots”:
• Primary group: pmadmin
• Secondary group: dba
• Login shell: ksh
• Home directory: <base_home_dir>/<user_login>. Typical value for
<base_home_dir> is “/export/home”.
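For reference, in an environment where users are managed locally (files-based) rather than with
NIS/NIS+ or LDAP, a user with the attributes above could be created with a command such as the
following (illustrative sketch only; in NIS/NIS+ or LDAP the account must be created with the
corresponding administration tools):
# useradd -g pmadmin -G dba -s /bin/ksh -d /export/home/spots -m spots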

At several points during package installation, the installation program checks for the usage of
NIS/NIS+ groups. If no NIS/NIS+ groups are detected then no action is required; otherwise the
following message is displayed:
NIS/NIS+ groups detected.
Please check if groups ‘pmuser’ and ‘pmadmin’ are defined in NIS/NIS+ before proceeding
Press <return> key to continue
The system administrator performing the installation must guarantee that those groups exist before
proceeding with installation.
At several points during package installation, the installation program checks for the usage of
NIS/NIS+ users. If no NIS/NIS+ users are detected then no action is required; otherwise the
following message is displayed:
NIS/NIS+ users detected.
Please check if user ‘spots’ is defined in NIS/NIS+ before proceeding
Press <return> key to continue
If UNIX users and groups are managed with NIS/NIS+, make sure the ‘spots’ user is created
according to the above, before proceeding with installation by pressing the “<return>” key.

In LDAP environments, the SPOTS regular user and group accounts must be created before
proceeding with the software installation. They must exist on the LDAP server and the local LDAP
client has to be configured correctly. Only under these circumstances will the installation scripts
detect the previously created users and groups in LDAP. For more information about the local LDAP
client setup please refer to Annex 10 – Setting up LDAP client in Solaris.

An exception to the rule defined in the previous paragraph for LDAP environments is the user
“oracle”, which is already created on the SPOTS Server machine (Single Server or Database Server)
during the installation of the Oracle software, and the user “spots”, which is created during the
installation of the SPOTS PM* packages.


9.3 SPOTS software - Choice of packages for V14.0 Core

This section specifies the set of packages to be installed and their installation sequence (during
SPOTS server installation), for each of the SPOTS V14 supported HW configurations (presented in
Section 2.3).

 For other HW configurations, contact your local Nokia Siemens Networks representative.

For Single Server installations: (Small and Medium configurations)

Package Short Name   Package Long Name              Installation Order   Uninstallation Order
spotsTKmodule        SPOTS Perl TK Module           1                    9
spotsNS              SPOTS Naming Server            2                    8
spotsDB              SPOTS Database                 3                    7
spotsDS              SPOTS Database Server          4                    6
spotsJRE             SPOTS JRE                      5                    5
spotsAS              SPOTS Application Server       6                    4
spotsTPBase          SPOTS TP Base                  7                    3
spotsDOC             SPOTS Documentation            8                    2
spotsCL              SPOTS Client                   Optional             1

If Real Time is applicable then the following applies:

Package Short Name   Package Long Name              Installation Order   Uninstallation Order
spotsTKmodule        SPOTS Perl TK Module           1                    13
spotsNS              SPOTS Naming Server            2                    12
spotsDB              SPOTS Database                 3                    11
spotsDS              SPOTS Database Server          4                    10
spotsJRE             SPOTS JRE                      5                    9
spotsAS              SPOTS Application Server       6                    8
spotsTPBase          SPOTS TP Base                  7                    7
spotsDOC             SPOTS Documentation            8                    6
spotsRTDB            SPOTS Real-Time Database       9                    5
spotsRTS             SPOTS Real-Time Server         10                   4
spotsRTA             SPOTS RT Agency                11                   3
spotsSAA             SPOTS SNMP Agent               12                   2
spotsCL              SPOTS Client                   Optional             1

For Application Server installations: (Large configurations)

Package Short Name   Package Long Name              Installation Order   Uninstallation Order
spotsTKmodule        SPOTS Perl TK Module           1                    6
spotsJRE             SPOTS JRE                      2                    5
spotsAS              SPOTS Application Server       3                    4
spotsTPBase          SPOTS TP Base                  4                    3
spotsDOC             SPOTS Documentation            5                    2
spotsCL              SPOTS Client                   Optional             1

If Real-Time applies, install Real-Time packages by the following order:

Package Short Name   Package Long Name              Installation Order   Uninstallation Order
spotsTKmodule        SPOTS Perl TK Module           1                    9
spotsJRE             SPOTS JRE                      2                    8
spotsAS              SPOTS Application Server       3                    7
spotsTPBase          SPOTS TP Base                  4                    6
spotsDOC             SPOTS Documentation            5                    5
spotsRTS             SPOTS Real-Time Server         6                    4
spotsRTA             SPOTS RT Agency                7                    3
spotsSAA             SPOTS SNMP Agent               8                    2
spotsCL              SPOTS Client                   Optional             1

For Database Server installations: (Large configurations)

Install Long-Term packages by the following order:

Package Short Name   Package Long Name              Installation Order   Uninstallation Order
spotsTKmodule        SPOTS Perl TK Module           1                    6
spotsNS              SPOTS Naming Server            2                    5
spotsDB              SPOTS Database                 3                    4
spotsDS              SPOTS Database Server          4                    3
spotsDOC             SPOTS Documentation            5                    2
spotsCL              SPOTS Client                   Optional (*)         1

If Real-Time applies, install Real-Time packages by the following order:

Package Short Name   Package Long Name              Installation Order   Uninstallation Order
spotsTKmodule        SPOTS Perl TK Module           1                    7
spotsNS              SPOTS Naming Server            2                    6
spotsDB              SPOTS Database                 3                    5
spotsDS              SPOTS Database Server          4                    4
spotsDOC             SPOTS Documentation            5                    3
spotsRTDB            SPOTS Real-Time Database       6                    2
spotsCL              SPOTS Client                   Optional (*)         1

Notes:

 The package spotsCL is installed on the SPOTS Server to make use of the SPOTS
Client on Unix. Please note that it is recommended to use the SPOTS Client on
Windows instead.

 If spotsCL is to be installed on Unix, it must be explicitly selected in the menu.

 The package spotsRTA is installed on the SPOTS Server to make use of the SPOTS
Real-Time Agency on Unix.

 The package spotsSAA is applicable in case the SPOTS SNMP Alarm Agent
functionality is desired.

 (*) In case spotsJRE was not previously installed, it will be automatically installed
before spotsCL.

The next tables present the possible configurations for installing the SPOTS add-on packages
(during SPOTS server installation), stating when each is applicable to a Standard Installation or to a
Customized Installation:

Active Warnings, System Monitor and Administration Console:

Package Short Name   Package Long Name                Installation Order   Uninstallation Order
spotsAWP             SPOTS Active Warnings Proxy      1                    3
spotsSYSM            SPOTS System Monitor             2                    2
spotsADM             SPOTS Administration Console     3                    1

Applicable to Standard Installations (all packages are installed).

Active Warnings and System Monitor:

Package Short Name   Package Long Name                Installation Order   Uninstallation Order
spotsAWP             SPOTS Active Warnings Proxy      1                    2
spotsSYSM            SPOTS System Monitor             2                    1

Applicable to Customized Installations.


Active Warnings and Administration Console:

Package Short Name   Package Long Name                Installation Order   Uninstallation Order
spotsAWP             SPOTS Active Warnings Proxy      1                    2
spotsADM             SPOTS Administration Console     2                    1

Applicable to Customized Installations.

System Monitor:

Package Short Name   Package Long Name                Installation Order   Uninstallation Order
spotsAWP             SPOTS Active Warnings Proxy      1                    1

Applicable to Customized Installations.

9.4 Installing SPOTS Software V14.0
The SPOTS package installation is menu driven.
The installation menu provides a set of options for standard SPOTS configurations, where the set of
packages to install and the associated values and parameters are pre-defined according to the
installation rules for the standard configuration.

 Login as root user.

 Insert the SPOTS Performance Management V14 DVD.

 Execute the following command:


# /cdrom/cdrom0/spots_installer

 As a result, the spots_installer menu is displayed, prompting for the SPOTS Installation Type.
INFO: On nearly every point in the installation menu it is possible to use the keys:
[ Q ] + Enter to exit the Installation Menu
[ I ] + Enter to list detailed information on the presented options
[ B ] + Enter to browse backwards through the menus

Please select [1] for Standard Installation.


Select [1] for the Single Server option

For example, select option [4] for a Single Server installation with Real Time & AddOns

For more information about which SPOTS components are behind the presented installation types,
please refer to Chapter 9.3 - SPOTS software - Choice of packages for V14.0 Core.

Choose the database size, in this example Medium [3]


Enter the SPOTS database password and, when asked, confirm it

The summary screen presents a listing of all the default values which will be used to install
SPOTS V14 Single Server + Real Time & AddOns.
By entering [ S ] + Enter the installation process will start.

 You can follow the installation progress on the screen. Detailed information about the
installation progress can be found in the logfile /var/tmp/install.log

9.4.1 Installing SPOTS-PMC in Solaris environment

The installation of SPOTS-PMC is not included in the available SPOTS Server standard installation,
but must be done explicitly by the user.
The steps to follow for installing SPOTS-PMC in a Solaris environment are similar to those followed
for a SPOTS Server installation; the only difference is in the options selected.

 Login as root user.

 Insert the SPOTS Performance Management V14 DVD.

 Execute the following command:


# /cdrom/cdrom0/spots_installer

 As a result, the spots_installer menu is displayed, prompting for the SPOTS Installation Type.
INFO: On nearly every point in the installation menu it is possible to use the keys:
[ Q ] + Enter to exit the Installation Menu
[ I ] + Enter to list detailed information on the presented options
[ B ] + Enter to browse backwards through the menus

Please select [1] for Standard Installation.

Note: Customized Installation can also be selected and the option to install SPOTS-PMC is
also available.


Select [4] for SPOTS Client option

The next window is a summary screen that presents a listing of all the default values which will be
used to install SPOTS V14 SPOTS Client.

By entering [ S ] + Enter the installation process will start.

 You can follow the installation progress on the screen. Detailed information about the
installation progress can be found in the logfile /var/tmp/install.log

9.5 System configuration issues
This section describes the final steps required after the installation of SPOTS SW Packages.

 Login as root user.

 Make sure that the DNS is adequately configured so that the host name and fully
qualified host name of all SPOTS hosts (including the local host and all other PMS or PMC
hosts existing in the SPOTS installation) can be correctly resolved to their respective IP
addresses.
If the DNS is not configured to allow this, it is necessary at least to create aliases for the host
names and fully qualified host names of all SPOTS hosts in the file "/etc/hosts". Edit this file
and include an alias for each SPOTS host.
 In the file "/etc/hosts" there will be a line where the fully qualified hostname is
equal to ‘loghost’ as for example:
#
# Internet host table
#
127.0.0.1 localhost
141.29.139.103 winter loghost
 DO NOT delete this line.
Each alias consists of a line with the following format:
<IP Address> <local or remote hostname> <fully qualified hostname>
For example:
141.29.139.12 spotshost spotshost.nsn.com

 Edit the file "/usr/dt/config/Xconfig" and set the property "Dtlogin*authorize" to "False".
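 For reference, after this edit the corresponding (uncommented) line in /usr/dt/config/Xconfig
should read approximately as follows:
Dtlogin*authorize: False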

 The spots user is automatically created during installation. However, it will be locked until a
password is assigned to it.

 Assign a password to the spots user, executing the following shell command:
# passwd -r files spots
The command will request a new password for the user and its confirmation.

 If you have installed Fault Tolerance with Disk Mirroring and want the user spots (and/or
other users) to also receive the email disk failure notification, you must include the spots user
email address (and/or the other users' addresses) in the address list as described in Section
6.2.1.2.

 Remove the SPOTS DVD from drive.

 Shut down the system, entering the shell command:


# /etc/shutdown -y -i6 -g0

 Wait for the system to reboot.


9.6 SPOTS Licensing Software

The SPOTS Licensing Software is a mandatory component of the SPOTS system and it controls
the access to SPOTS and its functionality.
The following licensed features exist in SPOTS V14:

• Feature 1 (RT/Online)

• Feature 2 (SAA)

These features are licensed with the SPOTS Licensing Key, ordered from Nokia Siemens Networks
prior to SPOTS Installation (see Section 3.1.1).
In addition to the SPOTS Licensing Key, one TP Licensing Key must be ordered from Nokia
Siemens Networks for each installed TP (see Section 3.1.1).
The SPOTS Licensing Key and the TP Licensing Keys must be installed (according to Section
9.6.1 below) on each system that contains either a Single Server installation (Small and Medium
configurations) or an Application Server installation (Large configurations).

9.6.1 Installing a license

 Login on the SPOTS server as spots user.

 Execute the following command:


$ $SPOTS_DIR/bin/spotslicense -i <license string from Nokia Siemens Networks>

 After installing a license the SPOTS services should be restarted. See chapter
4 for more details.
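As an illustrative sequence only (the placeholders in angle brackets stand for the keys received from Nokia Siemens Networks; the -i command is repeated once per installed TP, and -l lists the installed licenses as described in Section 9.6.2 below):
$ $SPOTS_DIR/bin/spotslicense -i <SPOTS license string>
$ $SPOTS_DIR/bin/spotslicense -i <TP license string>
$ $SPOTS_DIR/bin/spotslicense -l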

9.6.2 Dumping the installed licenses

 Login on the SPOTS PMS server as spots user.

 Verify if the $SPOTS_DIR variable is set (see Annex 1).

 Execute the following command:


$ $SPOTS_DIR/bin/spotslicense -l

9.6.3 Removing the installed licenses

 Login on the SPOTS PMS server as spots user.

 Execute the following command to remove the SPOTS license:


$ $SPOTS_DIR/bin/spotslicense -r SPOTS

 Execute the following command to remove a TP license:
$ $SPOTS_DIR/bin/spotslicense -r <TP name>


9.7 Real-Time Configuration issues


The configuration procedures described in the next sections must be performed, after installation of
the SPOTS RT Software, in order to ensure proper operation of the application.

 See Annex 4 for detailed information and best practices of SPOTS-RT configuration.

9.7.1 Configuring a Distributed SPOTS Environment with Real-Time


In a Distributed installation it is necessary to execute the following steps:

 Login as spots user in the Application Server (AS) machine.

 Copy the file $SPOTS_DIR/sdb.cfg from the Database Server (DS) machine to the
same directory in the Application Server (AS) machine.

 Edit the file $SPOTS_CL_DIR/config/spots_configuration.properties and add the following parameters:
RTS_ADDRESS = <IP or hostname of the RTS server>
RTS_PORT = 50061
APM_PORT = 50005

 Edit the $SPOTS_DIR/server_rt/properties/MonitorServer.properties file and insert the IP address of the database server:
database.hostname = <IP of the database server>
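For illustration only (the addresses below are hypothetical; substitute the real RTS and database server values of your installation):
# $SPOTS_CL_DIR/config/spots_configuration.properties (on the AS machine)
RTS_ADDRESS = 141.29.139.12
RTS_PORT = 50061
APM_PORT = 50005

# $SPOTS_DIR/server_rt/properties/MonitorServer.properties (on the AS machine)
database.hostname = 141.29.139.13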

9.7.2 Configuring SPOTS RT Agency Software (Solaris environment)

In this SPOTS version some RT Agents can run on the server machine and others on separate machines, for load distribution. However, it is advisable to have all RT Agents running on a separate Windows machine.

9.7.2.1 Configuring real_time.cfg files

For each RT Agent type running on a Solaris system, a specific real_time.cfg file must be created and configured.
 If this file does not exist for a specific RT Agent type, the system assumes that the corresponding RT Agents are running on a Windows system.

After the SPOTS LT installation the SPOTS_DATA environment variable is defined. For further
information refer to Annex 1.

Under $SPOTS_DATA/traffic_data/ there is a directory for each agent type according to the
following table:

Agent type   Files directory
CS           /cyclic
ST           /sctc_cyclic
HI           /hlri
Q3           /q3dc
SNMP         /spr
UTRAN        /upf
BR_OMCB      /exp
BR_RC        /ascii
GGSN         /ggsn
MSP          /msp
Inside each one of these directories there is a /cfg sub-directory where it could be necessary to
create a configuration file named "real_time.cfg". The parameters inside this file are used by the
loader command to transfer the appropriate agent input files to the right directory.

The real_time.cfg file syntax consists of one or more lines with the following format:
agency_dir agency_name file_name_filter
where:
agency_dir        The RTA installation directory. If the default path was accepted during the RTA installation, this directory is /opt/spots-rta; otherwise it is the installation path selected by the user.
agency_name       The agency name defined in the PMC.
file_name_filter  Logical expression that defines a file name filter for the files to transfer. Only the files whose names satisfy the file name filter expression are transferred by the loader to the agent input running in the agency defined by agency_name.

Note: The file name filter shouldn’t contain spaces, and it should be
surrounded with quotes.

There are several operators that can be used in the file name filter
expression:

Operator   Usage     Returns true if
!          !op       op is false
&          op1&op2   op1 and op2 are both true; op2 is evaluated conditionally
|          op1|op2   either op1 or op2 is true; op2 is evaluated conditionally

Different wildcards are supported:

Wildcard symbol   Description
*                 Substitutes zero or more characters.
?                 Substitutes one character.
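As a purely hypothetical illustration combining a wildcard with the negation operator, the following filter would match files for NE_1 whose names start with "sc" while excluding names ending in ".tmp":
'sc*NE_1*&!*.tmp'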


A real_time.cfg example for the HI Agent Type follows:


/opt/spots-rta Q_Agency_1 'sc*NE_1*'
/opt/spots-rta Q_Agency_2 'sc*NE_2*'
In this example two agencies (Q_Agency_1 and Q_Agency_2) are used to process HI data files
(both agencies must be registered in the PMC). With the file name filters defined, files from different
network elements collecting HI data (NE1 and NE2) will be transferred to different agents running in
different agencies.
If SPOTS processes only a few data types, there can be an advantage in splitting the data processing of the same type among different agencies, to benefit from the multiprocessing capabilities of the machine used.

Below is a real_time.cfg example for the CS Agent Type:


/opt/spots-rta H_Agency 'CS*'

 Edit all the real_time.cfg files and insert the parameters according to the appropriate RTA installation directory, the agency names defined in the PMC (refer to [1] for details on registering Agencies) and the file name filters for the files to transfer for each agent type.

9.7.3 Modifying the RT Agencies default memory (if desired)

 The default maximum memory available for each RTAgency is 256MB.


 If the size of the network is large and the RTAs require more memory to process the data, try changing the maximum memory value to 512MB and see if it is enough.
To do this, edit the file:
/opt/spots-rta/james/profiles/service.properties
and change the -Xmx value in the following line:
# Java Home
jdk.home = ../jre/bin/java -d64 -Drtagency -Xmx512m
Make sure that the SPOTS machine has enough memory.
The RTAs work in parallel with all other SPOTS processes!
 After modifying the file you should stop and start the RT Agency and also the SPOTS RT
services in the following order:

 Login as spots user

 Stop the RT Agency with the command


/etc/init.d/initSpotsAgency stop

 Start the RT Agency with the command

/etc/init.d/initSpotsAgency start

 Stop and start the SPOTS Real Time services as described in 4-Starting and
stopping SPOTS.


9.7.4 Modifying the MonitorServer default memory (if desired)


 If the size of the network is large and the MonitorServer requires more memory to process the data, try changing to a more appropriate memory value, e.g. 512M.
To do this, edit the file:
$SPOTS_DIR/server_rt/monitorserver
and change the -Xmx value in the line:
$JAVA -Drtmonitor -Xmx256m
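For example, to raise the limit to 512MB, the modified line would read as follows (a sketch only; the actual line in your monitorserver script may carry additional options):
$JAVA -Drtmonitor -Xmx512m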

9.7.5 Configuring the events gateway file

The alarms supported by SPOTS RT are generated by the RT Agents. These alarms are sent to
the SPOTS Application Server via a UDP port to the SPOTS Event Gateway.
The events gateway configuration file, egw.cfg, is located in the directory defined by the environment variable SPOTS_DIR (for further information refer to Annex 1). This file has the syntax exemplified below:
# Events Gateway Configuration File
#
# Syntax:
# DatabaseServerIDName=UDP_PORT_NUMBER
#
# Ex:
# DS@spots.nsn.com=10000
DS@cimbalino=10000
 Edit the egw.cfg file to specify the database to use to store the alarm’s information.
 For the changes to the egw.cfg file to take effect you should stop and start the SPOTS
services as described in 4-Starting and stopping SPOTS.

There is one non-commented line (not starting with #), which identifies the database where alarms are stored and the UDP port on which the Application Server listens for alarms.
The DatabaseServerIDName used has to be the same as the name given to the property “ServerID” in the SDS configuration file “sds.cfg” (see Annex 3). If no “ServerID” property is declared on a SDS, the SDS ID used shall be "DS@<SDS host>", where "<SDS host>" is the host name configured on the SDS system itself (either a "simple" UnixTM host name, e.g. "machineA", or a "fully qualified domain name", e.g. "machineA.nsn.com").
In the example, the database name is identified with DS@cimbalino and the UDP port number is
10000.
For the alarms there is no need to configure the UDP port number on the RT Agent. When the user configures an RT Agent, its domain is configured, and SPOTS thus internally passes the related UDP port information from the egw.cfg file to the RT Agent.

9.7.6 Stop the SNMP Agent in Solaris

 The configuration actions described in this section only apply if the spotsSAA
component is installed.
In order to receive the SPOTS alarms, the native Solaris SNMP Agent must be stopped.
To stop the SNMP Agent, run the following commands:

 Login as user root.

 Execute the following commands:

# svcadm disable snmpdx


# svcadm disable sma
# svcadm disable dmi
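To confirm the result, the service states can be listed with the standard Solaris command below; each of the three services should be reported as "disabled":
# svcs snmpdx sma dmi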

9.7.7 Connecting SAA to an external Fault Management application

 The configuration actions described in this section only apply if the spotsSAA
component is installed.
In order to forward alarm events to an external Fault Management application (e.g. TeMIP), so that
they can be displayed in the corresponding graphical user interface, it is necessary to complete
some configuration steps.
After SAA installation is completed use spotssnmpadmin application, as described below, to
configure the connection between SPOTS SAA and the external Fault Management application.

 Login as user root.

 Execute the SPOTS SAA Administration application:


# . /etc/saawd.env
# spotssnmpadmin

 The application displays the list of available options; for further information on the SAA Administration application, please refer to the SPOTS User Manual [1] or choose the Help option.
SPOTS SNMP Alarm Agent Administration

Available options:

1. Configure alarm trap destinations and SNMP protocol version
2. Count all pending alarms
3. Visualize all pending alarms
4. Visualize pending alarms with class ...
5. Visualize pending alarms with instance ...
6. Visualize pending alarms with triggered threshold ...
7. Visualize pending alarms with event time ...
8. Clear all pending alarms
9. Clear pending alarms with class ...
10. Clear pending alarms with instance ...


11. Clear pending alarms with triggered threshold ...


12. Clear pending alarms with event time ...
13. SAA Processes Administration
14. SAA Filtering Administration
15. Help
16. Exit

Please select your choice >>

 Choose Configure alarm trap destinations and SNMP protocol version option:
“1”

 As a result, the following screen is displayed


SPOTS SNMP Alarm Agent - Trap Destination

Entry Nr.| SNMP Protocol | IP Address


-----------------------------------------------
1 SNMPv2c 127.0.0.1

SPOTS SNMP Management

1. Delete entry
2. Add entry
3. Save and exit
4. Exit without saving

Select your option >>

 Choose Add entry option:


“2”
 Enter the SNMP protocol version to be used, supported versions are SNMPv1,
SNMPv2c and SNMPv3, for example:
“SNMPv1”
 Enter the IP address of the external Fault Management application, for example:
“188.102.4.43”

 If necessary, delete any existing entry that is unwanted. To do it, select the Delete
entry option:
“1”
 Enter the number of the entry to be deleted, for example:
“1”
 Inspect the output in order to verify that the configuration parameters are correct.

 After correct configuration, confirm the inserted data with Save and exit option:
“3”
<Press any key to continue>

 Changes will only take effect at the next SAA startup.

 Back in the main menu, exit from SAA Administration application, choosing the Exit option.

 SAA connected to the external FM application.

9.7.8 Multiple Ethernet Cards on the Same Machine

In some cases, the machine acting as an RT Server has multiple Ethernet cards installed and thus multiple IP addresses (if this is not your case, please skip this section).
The RT Server component of SPOTS selects one of the available IP addresses for communication with the RT Agencies and the PMC Client. However, due to the way that your network is configured, the RT system may not work properly with the selected IP.
To overcome this problem you can manually specify which IP address you want the RT Server to use.
The procedure is the following (assuming the RTS installation path is the default one, i.e. /opt/spots-pms/server_rt):
1. Stop the RT processes according to Section 4.1.2. Look for the directory /opt/spots-pms/server_rt/james/profiles/Managers on the RT Server machine, and delete all its contents.
2. Edit the manager.properties file in the /opt/spots-pms/server_rt/james/profiles directory of your RT Server machine, and add the line:
manager.ip = <ip_that_rt_should_use>
3. Edit the apm.properties file in the /opt/spots-pms/server_rt/apm/properties directory of your RT Server machine, and add the line:
java.rmi.server.hostname = <rt_server_hostname>
4. Edit the MonitorServer.properties file in the /opt/spots-pms/server_rt/properties directory of your RT Server machine, and add the line:
java.rmi.server.hostname = <rt_server_hostname>
5. Start the RT processes according to Section 4.2.2. Alternatively, you can restart the RT Server machine.

Note: If you have already registered RT Agencies before making these modifications, please unregister them and delete the contents of the following directory (assuming the RTA installation path is the default one, i.e. /opt/spots-rta):
/opt/spots-rta/james/profiles/Agencies.
Then you need to register the agencies again through the PMC Client.
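As an illustration only (the IP address and host name below are hypothetical placeholders; use the values appropriate for your RT Server), the added lines could look like:
# manager.properties
manager.ip = 141.29.139.12

# apm.properties and MonitorServer.properties
java.rmi.server.hostname = spotshost.nsn.com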


10 Installation of SPOTS V14 Software (Windows environment )

10.1 Installing SPOTS-PMC

 You must be logged on as an administrator or a member of the Administrators group in order to perform this installation.

 Within the Windows environment, it is assumed that the DVD drive letter is “D”. If not, you
shall use the correct letter instead.
 There is no facility to upgrade the SPOTS-PMC product. It is mandatory to de-install the older
version and install the current one. To de-install the previous SPOTS-PMC version, refer to
Chapter 14.
The installation process comprises several steps, which are described in the remainder of this
chapter.

10.1.1 Installation sequence

 Close all other applications before proceeding.


If you have other windows open, close them all before proceeding. You can switch to the other open windows by pressing the ALT and TAB keys simultaneously.
The installation steps of SPOTS-PMC are:

 Login to the computer.

 Insert SPOTS Performance Management V14.0 CoreDVD in the DVD drive.

 Double click on the Spots.exe application at the directory \windows\StartUp of the DVD.

 Click on the PMC Install button.

 The Welcome window is presented - click on the ‘Next >’ button.

 During the following steps, it is possible to interrupt the installation process by clicking on the
‘Cancel’ button, located on the lower right corner of each dialog. Additionally, it is possible to
return to the previous window using the ‘< Back’ button.


 The End-User License Agreement is presented. Read carefully all its terms.

 Click on the ‘Yes’ button if you accept all the terms contained in the EULA. Otherwise,
terminate the installation, clicking on the ‘No’ button.

 In the next dialog the required type of installation shall be selected.

 This window will only appear if the user has administration privileges. For non-
administrator users, the “Personal” option will be installed.


 There are two installation types available:
Global:   all Windows users have SPOTS PMC available from the SPOTS Performance Management folder in the Windows Start menu;
Personal: only the current user, who installed SPOTS PMC, has it available from the SPOTS Performance Management folder in the Windows Start menu.

 Click on the ‘Next >’ button.

 Confirm the location for the SPOTS PMC software and click on the ‘Next >’ button. The default SPOTS PMC installation directory is presented. Whenever a different location is desired, use the Browse button to select it.

 In the next dialog the SPOTS Naming Server connection parameters are specified.

 In all edit fields in this window, the user must repeat the typing of the first character
that is entered.

 The ‘Use local’ check box is disabled as the current version of the SPOTS PMS was not
released for the Windows environment.
The SPOTS Naming Server connection parameters are:
 Identification of the system where SPOTS Naming Server is installed
There are two methods to specify the system where SPOTS Naming Server is installed –
its fully qualified hostname (hostname/domain combination):

 click on the Hostname radio button;

 specify in the following text boxes the hostname and the domain of the system
where the SPOTS Naming Server package (or component) was installed (e.g.
spots and mycompany.com).
or its IP Address:

 click on the IP Address radio button;

 specify in the following text boxes the IP address of the system where the SPOTS
Naming Server package (or component) was installed (e.g. 141.99.130.99).
The former method is mandatory when the server IP Address is dynamically assigned.
 Independently of whether a hostname/domain combination or an IP Address is used, the specified location must be reachable using the ping command (e.g. ping spots.mycompany.com or ping 141.99.130.99).
 TCP/IP port number used by SPOTS Naming Server for communication


The default TCP/IP port number is presented. It is possible to select a different port number when the default one is already being used by another application. When the value is modified by mistake, it can be set back to its default value using the ‘Set Default Port’ button.

 Click on the ‘Start >’ button to begin the product installation. It is only enabled when all
required data fields are correctly specified.

 SPOTS-PMC software is being installed - please wait.

 Installation is complete. Click on the Finish button.

 PMC installation is completed.


To start using the installed components, select the Programs folder, select SPOTS Performance
Management folder and click on one of the existing options:

10.1.2 Troubleshooting
When the SCL cannot establish a session with its corresponding SNS, a window is presented with the following message:

Unable to contact naming server.


Fatal.

In such a situation, run the SpotsPing tool from the bin subdirectory of the SCL installation. To do so:

 Open a DOS command prompt window

 Assuming that SPOTS Client was installed in c:\SPOTS-PMC, type the following command:
cd /d c:\SPOTS-PMC\bin

 Run the SpotsPing tool, executing the command:


SpotsPing NS <address> [<TCP/IP port>]


where:
<address>       is the SNS address, its hostname or the TCP/IP address.
<TCP/IP port>   is the SNS TCP/IP port number (optional if the SNS has been configured with the default value “50000”).
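For example, using the SNS host name shown earlier in this chapter and the default port (both given only for illustration):
SpotsPing NS spots.mycompany.com 50000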
If the SNS is detected, the following message is presented:

Server detected!!!

Otherwise (SNS is not detected),

Unable to contact the server.

 Close the DOS command window with the Exit command.

10.1.3 SPOTS License checking


During login, SPOTS-PMC establishes a connection to a user-defined SAS, for which a valid
license must exist.
If the license does not exist or is invalid, a message is presented:

License not available.

In such cases, it is mandatory to obtain a license for the corresponding SAS.


To obtain further information concerning licensing, refer to Section 3.1.1.

10.2 Installing SPOTS DOC Software
The SPOTS Documentation can also be installed in a Windows environment.
The installation steps of SPOTS-DOC are:

 Login to the computer.

 Double click on the Spots.exe application at the directory c:\V14_Win_Install\StartUp of the computer.

 Click on the DOC Install button.


 The SPOTS-DOC Welcome window is presented - click on the Next > button.

 Read and accept the license agreement by clicking on the YES button.

 Confirm the location for the SPOTS-DOC software and click on the Next > button.

 Select the setup type, in this case choose a custom installation, and click on the Next >
button.


 Confirm the location for the SPOTS-DOC software and click on the Next > button.

 SPOTS-DOC is being installed - please wait.

 Installation is complete. Click on the Finish button.

 SPOTS Documentation installed in Windows environment.


11 Technology Plug-Ins (TPs)

After installing/upgrading SPOTS-PMS and PMC, it is mandatory to install the Technology Plug-Ins
associated with the existing software versions of each managed Network Element.

 Before installing the Technology Plug-Ins you must read carefully the TPs Release Notes [4] for important information regarding the TPs installation, especially if you have a set of TPs already installed and you are performing a TP upgrade.

11.1 Documentation
For more detailed information concerning Technology Plug-Ins, see:
• The “Technology Plug-Ins” chapter of the User Manual;

• The TPs documentation is included on the TPs distribution DVD (Technology Plug-Ins for Solaris) in HTML format. It can be viewed with a regular web browser by opening the file:
/cdrom/cdrom0/Doc/TpDocStart.htm
• The TPs documentation is also installed on the server machine when a TP is installed, and it can be viewed with a regular web browser by opening the file:
$SPOTS_DIR/public/Doc/tps/TpDocStart.htm

11.2 Installation / Upgrade / Uninstallation

 Before upgrading a TP it is necessary to export the user-defined extended fields. To see how to export an extended field, check the User Manual, chapter 5.2.15.

 For installing / upgrading / uninstalling the Technology Plug-Ins you must run the following
command as user ‘spots’:
$ tpfspots

 This will start the SPOTS-TP’s FrameWork window.

 If the following message appears after running the tpfspots command, it does not affect any functionality of the TP framework. It only means that the log4j logging framework is not able to contact the Active Warnings Proxy.
log4cxx: Could not connect to remote log4cxx server at [<awp_hostname>]. We will try
again later.
log4cxx: ConnectException: Error 0

In case the Active Warnings Proxy add-on is installed, it must be properly configured in the log4j_spots.properties file and must be running. On the other hand, if it is not installed, the log4j_spots.properties file must be checked for references to the Active Warnings Proxy add-on, and these references should be removed or commented out in order to avoid this warning.

 When installing / upgrading TPs:


After the installation you must execute the following steps:

 Install a valid license for the installed TPs. Please refer to chapter 9.6 - SPOTS
Licensing Software.

 Stop all spots services as is described in section 4.1 - Stopping SPOTS.


The Real Time daemons are stopped, the sas, sds and sns Long Term processes are
stopped.

 Start all spots services as is described in section 4.2 - Starting SPOTS.


The sas, sds and sns Long Term processes are started, the Real Time daemons are
started.

 For more detailed information regarding TPs installation / upgrade / uninstallation, please consult the documentation referred to in section 11.1 - Documentation.

11.3 NMS Configuration


The SPOTS system collects the PM files produced by the managed Network Elements from the
Network Manager Systems (NMS) in the network.
The SPOTS V14 version is able to inter-work with a set of NMS types which is determined by the
set of installed TPs.
Specific configuration tasks must be performed, in order to guarantee inter-operability with the
interfaced NMS systems.
These tasks include:
 declaring the interfaced NMS Systems in file
“$SPOTS_DIR/data/element_managers.cfg”
(one line per interfaced system - see TP Documentation for the syntax)
 performing any other configuration tasks specific to the interfaced NMS types (see TP Documentation)


12 Modifying a SPOTS V14 Installation (Windows environment)

12.1 SPOTS PMC


The installed SPOTS PMC components can be changed during the product lifecycle.
 Only the user who has installed SPOTS PMC is allowed to modify the SPOTS PMC
installation.

 Click the Start button and select Settings >> Control Panel.

 On the Control Panel folder, double click on the Add/Remove Programs icon.

 Select the SPOTS-PMC software and click on the Change/Remove button.


 Click on the Modify radio button and click on the Next> button.

 Use the checkbox on the left side of each feature to specify whether it shall be installed
(checked) or not installed (cleared). Terminate selection by clicking on the Next> button.

 The last dialog allows modifying the SNS Connection Parameters, as detailed within the
SPOTS PMC installation chapter. Click on the Start button to proceed with the required
product modifications.

 SPOTS-PMC installation is being modified - please wait.


 The modification of SPOTS PMC is complete. Click on the Finish button.

12.2 SPOTS DOC


The installed SPOTS DOC components can be changed during the product lifecycle.
 Only the user who has installed SPOTS DOC is allowed to modify the SPOTS DOC
installation.

 Click the Start button and select Settings >> Control Panel.

 On the Control Panel folder, double click on the Add/Remove Programs icon.


 Select the SPOTS-DOC software and click on the Change/Remove button.

 Click on the Modify radio button and click on the Next> button.

 Use the checkbox on the left side of each feature to specify whether it shall be installed
(checked) or not installed (cleared). Terminate selection by clicking on the Next> button.

 SPOTS-DOC installation is being modified - please wait.


 The modification of SPOTS DOC is complete. Click on the Finish button.

13 Updating SPOTS Software (Windows environment)

Every time a new delivery of a SPOTS product occurs for the Windows environment, the existing
installations shall be updated, in order to guarantee that all customers have the most recent version
of the product.

13.1 Updating SPOTS PMC


 Only the user who has installed SPOTS PMC is allowed to update the installation.
 Close all other applications before proceeding.
If you have other windows open, close them all before proceeding. You can switch to the other open windows by pressing the ALT and TAB keys simultaneously.

 Login to the computer.

 Insert SPOTS Performance Management V14.0 CoreDVD in the DVD drive.

 Double click on the Spots.exe application at the directory \windows\StartUp of the DVD.

 Click on the PMC Install button.


 In the first step the SPOTS-PMC Resume window is presented - click on the Next> button.

 SPOTS-PMC software is being updated - please wait.

 All deliverable files that have been modified will be replaced by their corresponding new
versions. All files that were modified by the end-user after the original installation will be
restored.

 Update is complete. Click on the Finish button.


13.2 Updating SPOTS DOC


 Only the user who has installed SPOTS DOC is allowed to update the installation.
 Close all other applications before proceeding.
If you have other windows open, close them all before proceeding. You can switch to the other open windows by pressing the ALT and TAB keys simultaneously.

 Double click on the Spots.exe application at the directory c:\V14_Win_Install\StartUp of the computer.

 Click on the DOC Install button.

 In the first step the SPOTS-DOC Resume window is presented - click on the Next> button.

 SPOTS-DOC software is being updated - please wait.


 All deliverable files that have been modified will be replaced by their corresponding new
versions. All files that were modified by the end-user after the original installation will be
restored.

 Update is complete. Click on the Finish button.

14 Uninstalling SPOTS Software (Solaris environment)

This section is valid for un-installing all installed PMS components on Solaris.

To remove the SPOTS SW, execute the following actions as described in the next sections:
1. Remove all SPOTS TPs (consult TP documentation as referred in section 11.1-
Documentation).

 Prior to initiating the spotsRTA (Real-Time Agency) de-installation, it is mandatory that the corresponding RT Agency has been stopped within the RT Administration (from SPOTS-PMC). Perform this before stopping the SPOTS applications. If this is ignored, it will be impossible to re-install SPOTS-RTA afterwards. Detailed information concerning agency administration is available in the SPOTS User Manual [1].
2. Stop SPOTS applications – see Section 4.1.
3. Remove all SPOTS packages according to the instructions in the section 14.1-Removing
SPOTS Packages.


14.1 Removing SPOTS Packages (V14.0 Core-Drop 2)


This action, which consists of removing the SPOTS packages installed in each host, can be
accomplished via the spots_installer menu (refer to Section 14.1.1).

 In a distributed environment, packages can be spread over multiple hosts;


consequently, the procedure described in the next sections shall be repeated for each
system hosting SPOTS components.

14.1.1 Removing SPOTS Packages with spots_installer

 Login as root user.

 Insert the SPOTS Performance Management V14.0 Core DVD.

 Execute the following command:


# /cdrom/cdrom0/spots_installer

 At the presented screen enter [ 4 ] + Return for removing SPOTS V14 Software.

 The status of the installed SPOTS components will be displayed. To remove a single component, enter the corresponding number. Type [ CR ] + Return if you want a complete uninstallation of the SPOTS V14 software.

 Confirm all prompts with “y”.

 Inspect the output in order to verify that all package removals are successful.

 Several files and directories were created during the execution of spotsPMS, spotsSAA, spotsRTS, spotsPMC and spotsRTA, independently of the user, which are not removed by the uninstaller, as they were not created during installation. To guarantee that the initial situation on the system is restored, the user who uninstalls these packages shall also delete the following directories and files:
Directories:
<SPOTS PMS Server base directory> (e.g. /opt/spots-pms/)
<SPOTS PMS Server traffic data directory> (e.g. /var/opt/spots-pms/)
<SPOTS PMC base directory> (e.g. /opt/spots-pmc/)
<SPOTS SAA base directory> (e.g. /opt/spots-saa/)
<SPOTS RT Server base directory> (e.g. /opt/spots-pms/server_rt/)
<SPOTS RT Agency base directory> (e.g. /opt/spots-rta/)

Files:
/etc/spotsenv

 In the same way, to guarantee that the initial situation is restored, the spots user must be removed (together with its home directory), along with the pmuser and pmadmin groups.
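A minimal cleanup sketch, assuming the default paths listed above were used and the package removal has already completed successfully (adjust the paths to your installation before running these commands as root):
# rm -rf /opt/spots-pms /var/opt/spots-pms /opt/spots-pmc /opt/spots-saa /opt/spots-rta
# rm -f /etc/spotsenv
# userdel -r spots
# groupdel pmuser
# groupdel pmadmin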

 SPOTS V14 software de-installed.


14.2 Removing SPOTS Add-ons Packages (V14.0 Core-Drop 2)


This action, which consists of removing the SPOTS Add-ons packages installed in each host, can
be accomplished via the spots_installer menu (refer to Section 14.2.1).

 In a distributed environment, packages can be spread over multiple hosts;


consequently, the procedure described in the next sections shall be repeated for each
system hosting SPOTS components.

14.2.1 Removing SPOTS Add-ons Packages with spots_installer

 Login as root user.

 Insert the SPOTS Performance Management V14.0 Core-Drop 2 DVD.

 Execute the following command:


# /cdrom/cdrom0/spots_installer

 At the presented screen enter [ 3 ] + Return for removing SPOTS V14 Software.

 The status of the installed SPOTS components will be displayed. To remove a single component, enter the corresponding number, or type [ CR ] + Return for a complete uninstallation of the SPOTS V14 software.

 Confirm all prompts with “y”.

 Inspect the output in order to verify that all package removals are successful.
 Several files and directories were created during the execution of spotsAWP,
spotsSYSM and spotsADM, independently of the user, which are not removed by the
uninstaller, as they were not created during installation. The user who uninstalls these
packages shall also delete these directories and files:
Directories:
<SPOTS PMS Server base directory> (e.g. /opt/spots-pms/)
<SPOTS Administration Console directory> (e.g. /var/opt/spots-admc/)

 SPOTS V14 Add-ons software de-installed.


15 Uninstalling SPOTS Software (Windows environment)

15.1 Uninstalling SPOTS PMC


 Only the user who has installed SPOTS PMC is allowed to de-install the product.
 Close all SPOTS PMC applications before proceeding.
If you have other windows open, close them all before proceeding. You can switch to the other open windows by pressing the ALT and TAB keys simultaneously.

 Click the Start button and select Settings >> Control Panel.

 On the Control Panel folder, double click on the Add/Remove Programs icon.

 A window is presented containing a list of all products installed whose de-installation process
has been tailored for the Microsoft Windows environment.

 Select the SPOTS-PMC software and click on the Change/Remove button.


 Click on the Remove radio button and on the Next> button.

 Confirm product removal by clicking on the Ok button.

 SPOTS PMC is being de-installed - please wait.

 The removal of SPOTS PMC is complete. Click on the Finish button.

 During de-installation, the operator may be requested to allow removal of a file named “Win32Printer.dll”, a shared file installed with SPOTS-PMC. However, it might be used by other applications. In such an event, other applications should have registered themselves as users of that file. Sometimes this does not occur, thus every time a shared file is about to be removed a similar message is presented.

 In case of doubt, do not remove the file. Its presence is harmless to the system, just
occupying disk space. Otherwise click on the Yes button to allow removal.

 SPOTS-PMC software removed. Reference to the SPOTS-PMC software is no longer


presented since the list of currently installed products was updated.

15.1.1 Files not removed during de-installation

Each PMC user may configure the application according to his/her personal preferences, which are
stored in several configuration and log files, in each user profile directory, by default:
“C:\Documents and Settings\<User Login Name>\NokiaSiemensNetworks” for Windows
2003 or XP
The files located under the directory Nokia Siemens Networks are not removed during PMC de-
installation, thus they shall be manually removed.
If a user wants to restore all his/her original settings, it is only required to delete these user-
dependent files while PMC remains installed.

15.2 Uninstalling SPOTS DOC
 Only the user who has installed SPOTS DOC is allowed to de-install the product.
 If you have other windows open, close them all before proceeding. You can switch to the other open windows by pressing the ALT and TAB keys simultaneously.

 Click the Start button and select Settings >> Control Panel.


 On the Control Panel folder, double click on the Add/Remove Programs icon.

 A window is presented containing a list of all products installed whose de-installation process
has been tailored for the Microsoft Windows environment.

 Select the SPOTS-DOC software and click on the Change/Remove button.

 Click on the Remove radio button and on the Next> button.

 Confirm product removal by clicking on the Ok button.


 SPOTS DOC is being de-installed - please wait.

 The removal of SPOTS DOC is complete. Click on the Finish button.

 SPOTS-DOC software removed. Reference to the SPOTS-DOC software is no longer


presented since the list of currently installed products was updated.


16 Abbreviations

APS Application Program System.


BSS Base Station Subsystem.
BTS Base Transceiver Station.
BTSM BTS site Management.
CCNC Common Channel Network Control.
CDE Common Desktop Environment.
CFS Common File Store.
CGI Cell Global Identity.
CI Cell Identity (CGI’s component).
EDGE Enhanced Data rates for GSM Evolution.
EM Element Manager (SC, OMC-S, RC, OMC-B or @vantage Commander).
EN Equivalent Node.
GERAN GSM and EDGE Radio Access Network.
GSM Global System for Mobile communications.
HLR Home Location Register.
HLRi HLR-Innovation.
LAC Location Area Code (CGI’s component).
MSC Mobile Services Switching Centre.
NE Network Element.
NMS Network Management System, also known as “EM”.
O&M Operation and Maintenance.
OMC Operation and Maintenance Centre.
OMC-B Operation and Maintenance Centre for Nokia Siemens Networks SBS NEs.
OMC-S Operation and Maintenance Centre for Mobile-Core NEs.
OMS Operation and Maintenance System.
PDC Performance Data Collector.
PM Performance Management.
PMC Performance Management – Client Configuration.
PMS Performance Management – Server Configuration.
RAID Redundant Array of Independent Disks
RC Radio Commander.
RTA SPOTS Real Time Agency
RTS SPOTS Real Time Server
SAS SPOTS Application Server.
SBS Nokia Siemens Networks Base Station.
SC Switch Commander.
SCL SPOTS Client application.
SDS SPOTS Database Server.
SNS SPOTS Naming Server.
SOC Set of Counters.
SOO Set of Objects.
SOV Set of Variables.
SPOTS Support for Planning, Operation&Maintenance and Traffic Analysis.
SSNC Signalling System Network Control.
SW Software.
UMTS Universal Mobile Telecommunication System.
UTRAN UMTS Terrestrial Radio Access Network.

17 References

[1] User Manual, SPOTS V14 (SSA Doc E200401-01-114-V14.0I-*)

[2] Release Notes, SPOTS V14.0.x (SSA Doc E200401-01-121-V14.0I-*)

[3] Installation and User Manual, SPOTS-BAR V4.0 (SSA Doc E200401-01-214-V14.0I-**)

[4] Release Notes, SPOTS V14.0.x TPs (SSA Doc E200401-01-221-V14.0I-*)

Annex 1 – UNIX environment variables


UNIX environment variables

LD_LIBRARY_PATH
Path to ORACLE and SPOTS libraries.
PATH
Path for file search.
SPOTS_DIR
SPOTS base installation directory (default value: /opt/spots-pms).
SPOTS_DATA
SPOTS base directory for PM data collection from NMSs (default value: /var/opt/spots-pms).
TMP
SPOTS Directory for temporary files (default value: /tmp).
ORACLE_HOME
Oracle home directory (default value: /opt/oracle/product/9.2.0).
ORACLE_SID
Oracle SID for SPOTS database, valid input must be a string of letters and/or digits with a
length less than or equal to 4 (default value: spot).
TNS_ADMIN
Path to Oracle SQL*Net configuration files (value:
$ORACLE_HOME/network/spots_ora_admin).
SPOTS_SAA_DIR
SPOTS Alarm Agent (SAA) base installation directory (default value: /opt/spots-pms).
SR_LOG_DIR
SAA log directory (value: $SPOTS_SAA_DIR/log).
SR_AGT_CONF_DIR
SNMP Agent configuration directory (value: $SPOTS_SAA_DIR/config).
SR_MGR_CONF_DIR
SNMP Manager configuration directory (value: $SPOTS_SAA_DIR/config).

All variables, except the SAA-related ones, are defined after SPOTS installation in the file “/etc/spotsenv”.
The SAA-related variables are defined in the file “/etc/saawd.env”.

To display the value of any of these environment variables, use the following procedure:

 Execute the following commands (this example applies to the case of the “spotsenv” file –
for the “saawd.env” file, simply replace the file name):
$ . /etc/spotsenv
$ echo $<environment_variable_name>

 The value of the environment variable is displayed.
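For instance, to display the SPOTS base installation directory (the output shown assumes the default value):
$ . /etc/spotsenv
$ echo $SPOTS_DIR
/opt/spots-pms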

 It is recommended not to change the value of environment variables.
If it should become imperatively needed to change any of them, follow the procedure
described below.
 Before changing environment variables, the SPOTS application must be stopped;
the new values shall become available after re-starting SPOTS.

 Login as spots user.

 Edit the file “/etc/spotsenv” or “/etc/saawd.env”, according to the variable in question.

 If already defined, locate the variable definition and replace the associated value
by the new one. If not yet defined, create a new variable definition copying an
existing definition for any other variable and then changing the variable name and
value.

 Log out, in order for the changes in your shell environment to take effect.

 The environment variable’s new value is set.

Annex 2 – Domains’ Configuration


Domains’ Configuration

 This annex refers to concepts that are explained in the SPOTS User Manual ([1]),
Section 1.3 (Domains) — read this section before proceeding.

The configuration of domains is performed on a cluster basis. The cluster’s configuration, stored
in the file “$SPOTS_DIR/domain.cfg”, is associated with the related SAS.

The syntax of the file “domain.cfg" is the following in EBNF notation:

<domain_def> ::= domain <name> <database_server_id> <subdomains>;
<name> ::= string
<database_server_id> ::= string | <void>
<subdomains> ::= {<domain_def> <other_subdomains>} | <void>
<other_subdomains> ::= <domain_def> | <void>
<void> ::=

 The configuration of domains must be done “off-line”, this means that the SAS and
SDS(s) servers of the cluster must be shut down before any changes are done to the file
“domain.cfg”.
SNS does not need to be shut down during this process.

Below, some examples of “domain.cfg” files are presented.

Example 1:

domain Root;

Example 2:

domain Root SDS_1;

Example 3:

domain Root {
domain North SDS_1;
};

Example 4:

domain Portugal {
domain North SDS_1;
domain South {
domain Lisbon SDS_2;
domain Alfragide SDS_3 {
domain Alfragide_North;
domain Alfragide_South;
};
};
};

 It must be noticed that:


• There can only be one top domain per “domain.cfg” file.
The following example is wrong:
domain Portugal SDS_1;
domain Spain SDS_2;

• If the top domain is called “Root”, the cluster will be associated dynamically with the
domains of the remaining clusters.
• There can only exist one cluster with a top domain called “Root”.
• The sub-domains of a top domain called “Root” cannot be called “Root”.
• Domains cannot be called “domain”.
• The names of SDSs must be the same as the names given to the property “ServerID”
in the SDS’s configuration file “sds.cfg” (refer to Annex 3).
• Whenever a domain (or sub-domain) is associated to a SDS, then none of its sub-
domains can be associated to another SDS. The following “domain.cfg” example is
wrong:

domain Portugal {
domain North SDS_1 {
domain Porto SDS_2;
};
}

• Whenever a domain (or sub-domain) is associated to a SDS, then this domain can
have more than one level of sub-domains. The following “domain.cfg” example is
valid:
domain Portugal {
domain North SDS_1 {
domain Porto {
domain Ribeira;
};
};
}

Annex 3 – Server Configuration Files


Server Configuration Files

The configuration of each SPOTS PMS component is stored in a file located under the directory $SPOTS_DIR of the server's file system (the exception is loader.cfg, which is located in $SPOTS_DIR/data).
The following configuration files are available:

PMS Component                    Filename
SPOTS Naming Server (SNS)        sns.cfg
SPOTS Application Server (SAS)   sas.cfg, loader.cfg
SPOTS Database Server (SDS)      sds.cfg

Each file defines values for a set of server "properties". The list of all server properties is given
in the tables presented further on in this Annex, together with their default values.
The “*.cfg” file is automatically created upon the installation of the SPOTS PMS component,
with the property values provided during the installation procedure. For all remaining (i.e. not
declared) properties, the default values take effect.
If it is desired to modify the value of a property (i.e. either to change a property value already
declared, or to set a not yet declared property to a non-default value), edit the file according to
the syntax mentioned below, changing or adding the property in question, and restart the
corresponding PMS component.

 In order to restart a PMS component, proceed as follows:


Only for SAS:
As spots user, issue the SPOTS command “spotsShutdown -r” on the host where
the SAS is located.
For any PMS component (SAS included):
As spots user, issue the SPOTS commands “/etc/init.d/initSpotsPMS stop” and
“/etc/init.d/initSpotsPMS start”, in this order: all the (installed) PMS components are
restarted.
For the description of the SPOTS commands, refer to [1], Section 5.1.

The structure of the configuration file consists of a line per property with the following syntax:
<property>=<value>
where <property> is a property name and <value> is a string, a number or a boolean (true/false)
value.
All lines starting with the characters “#”,“;”, “[“ or “]” are considered comments, thus not
processed. Additionally, leading line spaces are ignored.
The name and the value of a property are case-insensitive. Example: “ServerId” and “serVerId”
have both the same meaning.
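A short, purely illustrative excerpt of such a file (hypothetical values; the property names are taken from the tables below):
# sas.cfg - illustrative excerpt
ServerID=AS@spotshost.nsn.com
NamingServerHost=spotshost.nsn.com
NamingServerPort=50000
LogFileTimeStampGMT=true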

 All the properties whose values are filenames or directories must be defined with the full
pathname.

 In order to take full advantage of the system performance, make sure to adjust the “LoaderThreads” property in the “loader.cfg” file.

SPOTS Naming Server (file “sns.cfg”)

Property | Type | Description | Default value
LogFile | String | Name of the Log file | $SPOTS_DIR/log/sns.log
LogFileTimeStampGMT | Boolean | Flag for log-messages with time stamp in GMT | FALSE
LocalHost | String | Fully qualified domain host name or IP Address of the local host | Fully qualified domain host name
TcpPort | Number | TCP IP port to be used by the SNS | 50000
PingPeriod | Number | Interval in seconds to “ping” other servers | 120

SPOTS Application Server (file “sas.cfg”)

Property | Type | Description | Default value
ServerID | String | Server identifier | AS@<SAS host name>
LocalHost | String | Fully qualified domain host name or IP Address of the local host | Fully qualified domain host name
TcpPort | Number | TCP IP port to be used by the SAS | sns + 1
LogFile | String | Name of the Log file | $SPOTS_DIR/log/sas.log
LogFileTimeStampGMT | Boolean | Flag for log-messages with time stamp in GMT | FALSE
NamingServerHost | String | Fully qualified domain host name or IP Address of the associated SNS | Fully qualified domain host name
NamingServerPort | Number | TCP IP port of the associated SNS | 50000
PingPeriod | Number | Interval in seconds to “ping” other servers | 120
DataLoadDir | String | Root of directory tree used for storing PM files collected from the network. From this location, the collected files are thereafter read by the loader command for loading into the DB. | $SPOTS_DATA/traffic_data
PublicDir | String | Public directory | $SPOTS_DIR
MaximumSimultaneousInvocationsPerService | Number | Maximum number of simultaneous service invocations for all sessions. | 50
MaximumSimultaneousInvocationsPerClient | Number | Maximum number of simultaneous service invocations for a given session. | 10
MinimumPercentageIntegrationData | Number | Minimum percentage of data, for the aggregation interval, needed to perform any Data Aggregation. Make sure to set the same value in ‘sas.cfg’ (used for example in Ad-Hoc reports) and ‘sds.cfg’ (used for example in mkhistory). | 60
numDetailedDays | Number | Maximum number of detailed days that can be requested for an ad-hoc report. | 15
SasMaximumSize | Number | Specifies in Kb the maximum allowed size for SAS. | 90% of machine physical memory; the maximum allowed value is also 90% of machine physical memory.
ReportMaximumSize | Number | Specifies in Kb the maximum size for each report. | 512 * 1024
CSVPrecision | Number | Specifies the number of significant digits used by the reporter command when outputting data to CSV files. | 4
RejectOnSuspectFlag | Boolean | The loading of traffic records marked in the imported traffic files as "suspect" is controlled by the value of this property defined in the SAS configuration file. The possible values are "false" (load suspect records) and "true" (do not load suspect records). By default (i.e. if this property is not specified), suspect records are not loaded. | TRUE
ForceUpdateRecords | Boolean | The loading of duplicate traffic records (i.e. records already existing in the SPOTS DB for the same object instance, same measurement and same timestamp) is controlled by the value of this property defined in the SAS configuration file. The possible values are "false" (discard duplicate records) and "true" (overwrite existing records). By default (i.e. if this property is not specified), duplicate records are discarded. | FALSE
DumpUserLabels | Boolean | When an object does not have a UserLabel defined, the SAS creates a default UserLabel that is initialized with the object’s MOI. The flag DumpUserLabels is used by the SAS to know what to do when export/import operations of Extended Fields are executed, when the UserLabel==MOI. If DumpUserLabels=True: during import, the UserLabel is inserted in the database; during export, the UserLabel is written to a file. If DumpUserLabels=False (default): during import, the UserLabel is rejected (Reason: Object Instance with same User Label); during export, the UserLabel is not written to a file. | FALSE
ShutDownRetries | Number | Maximum number of retries for SAS to forcibly shut down, if it is in fact due to terminate, but there are still pending jobs. | 10
dpNUMBER | Number | Decimal precision for number of events occurrences on all reports. | 0
dpPERCENTAGE | Number | Decimal precision for percentages on all reports. | 2
dpSECOND | Number | Decimal precision for seconds on all reports. | 0
dpDECI_SECOND | Number | Decimal precision for deciseconds on all reports. | 0
dpMILI_SECOND | Number | Decimal precision for milliseconds on all reports. | 0
dpMICRO_SECOND | Number | Decimal precision for microseconds on all reports. | 0
dpERLANG | Number | Decimal precision for erlang on all reports. | 2
dpDECI_ERLANG | Number | Decimal precision for decierlang on all reports. | 2
dpMILI_ERLANG | Number | Decimal precision for millierlang on all reports. | 2
dpMEGA_ERLANG | Number | Decimal precision for megaerlang on all reports. | 2
dpERLANG_TIMES_SECOND | Number | Decimal precision for erlangs per second on all reports. | 2
dpERLANG_TIMES_HOUR | Number | Decimal precision for erlangs per hour on all reports. | 2
dpDECI_ERLANG_TIMES_SECOND | Number | Decimal precision for decierlangs per second on all reports. | 2
dpDECI_ERLANG_TIMES_HOUR | Number | Decimal precision for decierlangs per hour on all reports. | 2
dpNUMBER_PER_SECOND | Number | Decimal precision for the number of events occurrences per second on all reports. | 0
dpDEFAULT | Number | Decimal precision for all other units on all reports. | 2

SPOTS Application Server (file “$SPOTS_DIR/data/loader.cfg”)

Property | Type | Description | Default value
LoaderThreads | Number | Number of files to be loaded simultaneously. To maximize the system performance, this integer value, ranging from 1 to 20, should be set according to the following principle (1): Int (2 * Nº of processors / Nº of loaders). See also the MaximumSimultaneousInvocationsPerClient property. | 2

(1) The ‘Nº of loaders’ value stands for the number of loader commands running simultaneously, for instance, when you are following a load per file type (trf, spr, exp, ascii…) approach. Example: for a machine with 4 processors running 3 simultaneous loaders (trf, spr and exp), set the ‘LoaderThreads’ property to 2. To be more precise, the ‘LoaderThreads’ property value is equal to the integer part of the value given by ‘(2 * 4 / 3)’.

SPOTS Database Server (file “sds.cfg”)

Property | Type | Description | Default value
ServerID | String | Server identifier | DS@<SDS host name>
LocalHost | String | Fully qualified domain host name or IP Address of the local host | Fully qualified domain host name
TcpPort | Number | TCP IP port to be used by the SDS | sns + 2
LogFile | String | Name of the Log file | $SPOTS_DIR/log/sds.log
NamingServerHost | String | Fully qualified domain host name or IP Address of the associated SNS | Fully qualified domain host name
NamingServerPort | Number | TCP IP port of the associated SNS | 50000
PingPeriod | Number | Interval in seconds to “ping” other servers | 120
DatabaseName | String | Identifies the database to which the SDS should connect. This designation is defined in the file $TNS_ADMIN/tnsnames.ora of Oracle. | lm_spot
ConnectionPoolSize | Number | Contains the number of Oracle connections kept in a pool for re-use | 20
Trace | Boolean | Enables the monitoring of heavy database operations; the result is written in the LOG file | FALSE
LogFileTimeStampGMT | Boolean | Flag for log-messages with time stamp in GMT | FALSE
NumberDaysInDetailPartition_86400 | Number | Number of days to retain detailed data for granularities bigger than 5 minutes when Data Partition has been set as “Use Partitioning” during installation. The minimum values accepted are the default; if lower values are used then the default is assumed. | 15
AlarmLogSize | Number | Number of alarms stored in the database | 1000
MinimumPercentageIntegrationData | Number | Minimum percentage of data, for the aggregation interval, needed to perform any Data Aggregation. Make sure to set the same value in ‘sas.cfg’ (used for example in Ad-Hoc reports) and ‘sds.cfg’ (used for example in mkhistory). | 60
NumberMonthsInHistoricalPartition | Number | Number of months to retain historical data. This configuration serves to introduce the sliding window. | Not defined
Annex 4 – SPOTS RT Configuration

Overview
This annex describes variables and configuration parameters based on the CPU and physical memory available on the SPOTS server machine, as well as the best practices in RT configuration, with the purpose of obtaining good performance and stability of the system, and other relevant issues.
This annex provides hints on RT best practices, such as configuring one agency with one agent for each data type, which is the configuration that gives the best RT performance. An agency uses just one of the CPUs available on the system; therefore, if two or more agents are configured in the same agency, they share the same CPU and compete for it during the iteration load.

Number of loader threads

This parameter can be changed in the SPOTS configuration file loader.cfg, located in the
directory /opt/spots/pms/data, by changing the LoaderThreads parameter.

bash-3.00$ cat loader.cfg


# Loader Configuration File

LocalHost=ol213
LoaderThreads=2
LogFileTimeStampGmt=false

Changing this parameter increases or decreases the number of threads launched by each
loader, that is, the number of files that are converted in parallel by the respective converter for
the defined loader type, improving the speed at which the files are converted and loaded into
the DB.
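For example, to use four converter threads per loader (an illustrative value; see the hints at the
end of this annex before increasing it), edit loader.cfg and change the line to:

LoaderThreads=4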

Crontab schedule scripts that perform collection and loading

These scripts, scheduled in the SPOTS user crontab, perform the collection and loading
commands for most SPOTS installations, where the commands are called sequentially.

The following is an example of a basic, typical script:


**************************************************************************************************
#!/usr/bin/ksh
Collector -t TRF_cyclic
Collector -t Q3

Loader -t TRF_cyclic
Loader -t Q3
Although the collection of the q3 files is defined in the script, the collection of the cyclic files
runs first, and the conversion and loading of the q3 files only start afterwards, because shell
scripts are interpreted sequentially in Solaris. In order to speed up the process and improve
performance, parallel execution should be used and the processes should run in the
background.
Add an “&” after each line that you want to launch in the background, as shown in the
example below:
**************************************************************************************************
#!/usr/bin/ksh
Collector -t TRF_cyclic &
Collector -t Q3
Loader -t TRF_cyclic &
Loader -t Q3
With this change, the collection of the cyclic and q3 data files is performed in parallel, as are
the conversion and loading of the cyclic and q3 data files.
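For illustration only, such a script is typically invoked from the SPOTS user crontab; the
schedule and script path below are examples, not values defined by this guide:

0,15,30,45 * * * * /export/home/spots/collect_and_load.sh > /dev/null 2>&1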

How many RT Agencies, and how many Agents per Agency, should be configured/created?

By default (since SPOTS V13M) an agency starts with a fixed memory allocation (256MB in the
examples below); this parameter can be changed to the value needed: 512MB, 1024MB and
other values.

The best performance is achieved by pairing one agency with one agent.

In case you have one agency with all the agents needed:
The problem is that each agency uses one CPU and, consequently, all the configured agents
use the same CPU, which decreases performance. On the other hand, if the crontab job does
not run the processes in the background, each data type arrives at the agency sequentially and
therefore creates no concurrency on the CPU.

More complex scenarios:

This configuration is time consuming to perform and, when the customer has three types of
data, it implies the analysis of several agencies/agents and several logs to monitor.

1 Agency & 1 Agent x 3 Data types = 3 Agencies = 3 Agents


Notes:
1. Memory allocations
1 Agency 256MB / 3 Agencies 768MB
1 Agency 1024MB / 3 Agencies 3072 MB ≈ 3GB physical memory allocated!!

If one agency/agent is not enough to analyse all the data coming from a data type, more
agencies/agents are necessary, with the corresponding performance, CPU usage and memory
implications.

2 x (1 Agency & 1 Agent) x 3 Data types = 6 Agencies = 6 Agents


Notes:
1. Memory allocations

262 E200613-01-115-V14.0I-34
1 Agency 256MB / 6 Agencies 1536 MB
1 Agency 1024MB / 6 Agencies 6144 MB ≈ 6 GB physical memory allocated!!

More complex scenarios can be configured as the example below:

(1 Agency & 2 Agents ) x 3 Data types = 3 Agencies = 6 Agents


Notes:
1. Memory allocation
1 Agency 256MB / 3 Agencies 768 MB
1 Agency 1024MB / 3 Agencies 3072 MB ≈ 3 GB physical memory allocated!!

2. CPU concurrency
The problem is that each agency uses one CPU and, consequently, all the configured agents
use the same CPU, which decreases performance. On the other hand, if the crontab job does
not run the processes in the background, each data type arrives at the agency sequentially and
therefore creates no concurrency on the CPU.

2 x (1 Agency & 2 Agents) x 3 Data types = 6 Agencies = 12 Agents
Notes:
1. Memory allocation
1 Agency 256MB / 6 Agencies 1536 MB
1 Agency 1024MB / 6 Agencies 6144 MB ≈ 6 GB physical memory allocated !!

2. CPU concurrency
As above, each agency uses one CPU and, consequently, all the configured agents use the
same CPU, which decreases performance. If the crontab job does not run the processes in the
background, each data type arrives at the agency sequentially and therefore creates no
concurrency on the CPU.

Agents configuration files:

E200613-01-115-V14.0I-34 263
Nokia Siemens Networks S.A Installation Guide (SPOTS V14.0)

• $SPOTS_DIR/data/dataproxy/properties.cfg

The RTATimestamp parameter controls the data that is sent by the DataProxy to the agencies:
- The default value is 2 (value in hours).
- Shorter periods mean that less data is sent by the DataProxy to the agents and that the
agencies/agents have less load; however, some historical information is lost.
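For illustration, assuming this file uses the same Key=Value syntax as loader.cfg, the
corresponding entry would be:

# RTATimestamp is expressed in hours (default value: 2)
RTATimestamp=2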

• pdc.properties

The configuration file for each agent type installed is located at:
/opt/spots-pms/server_rt/agent/<AgentType>/properties/

Some of the parameters in this configuration file are active and others are not; an example of
its contents is shown below.

# Aggregation flag.
aggregation.flag=FALSE
# Debug level
agent.debug.level=0
#
# Write Alarms To File
agent.write.alarms.to.file=FALSE
#
# Supported Granularity - 300, 900, 1800, 3600, 86400
supported.granularities=300,900,1800,3600,86400
#
# Pdc types for making SPF output file
#
# pdcType1 - 1st pdc Type
# pdcType2 - 2nd pdc Type
# ...
# pdcTypeN - Nth pdc Type
#
#pdcType1=CS
agent.write.data.to.file=FALSE

Important Parameters:

This parameter defines how far back in time the agent accepts data to process; its value is
multiplied by the data granularity. For example, for data with a granularity of 900 seconds, the
value 10 means that the agent accepts data up to 2.5 hours back (10 x 900 s = 9000 s) for
processing.

• Configuring the $SPOTS_DIR/data/dataproxy/agent_routes.cfg file
The SPOTS Data Collection (SPOTS DC) is able to select the correct RT Agent to which it shall
send its converted data records, by requesting the list of RT Agents available from SPOTS
Naming Server (SNS).

The protocol that allows the SPOTS DC to select the correct RT Agent is based on a routing
table available on file ‘agent_routes.cfg’, where the optional attributes are the NE name and
Measurement to map data records to the correct RT Agent/Agency.

File characteristics:
This routing file contains a rule list that is used by the Converters to select which RT
Agent/Agency receives the data.
A rule occupies only one line.
The file must contain at least one valid rule line.
Rules that appear first have precedence over rules that appear later.
The default configuration rule line is defined as *;*;*

The format of each rule line inside this file shall be the following (semicolon separation):

<NE name>;<Measurement name>;<RT Agency Name>

Where:
• <NE name> - Name of a Network Element, as known in the SPOTS Database. If an
unknown or missing NE is given, then this rule is considered invalid.
• <Measurement name> - Name of the SPOTS measurement. Must be a known SPOTS
Measurement. If an invalid or missing measurement is given, then this rule is
considered invalid.
• <RT Agency Name> - Name of an Agency where the Agent that processes the Data
Type associated with <Measurement name> is registered. If there is no agent in the
given <RT Agency> that can process the required data type, then this rule is
considered invalid.

Wildcards can be used in any of the fields. Wildcards can have the following format:

The ‘*’ character represents any given number of characters. Example ‘NE*’ can represent
‘NE1234’ or ‘NEabcd’.

The ‘?’ character represents any character. Example: ‘NE?’ can represent ‘NE1’ or ‘NEa’.

The Wildcard format can be mixed, which means, the wildcard ‘*’ can appear one or more times
in the same field, and that also applies to the ‘?’ wildcard. Both the ‘*’ and ‘?’ wildcards can
appear mixed in the same field.
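As an illustration only (the NE, measurement and agency names below are examples), a rule
mixing both wildcard types could be:

NE?2*;TRF*;Agency1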

A default configuration rule exists for this file, in case no manual configuration is performed by
the SPOTS Administrator. This default configuration rule is defined as:


*;*;*

In this default configuration, the SPOTS DC will choose the first available agent, using the
following algorithm:

Read agent list from the SNS.

Iterate through each of the agents present on the agent list and choose the first one that
supports the required Data Type. If no agent is found, do not send data to an RT Agent
and issue an error on the SPOTS DC logs.

Some examples of rule lists are given below:

NE1;*;Agency1
*;*;*

In this example the SPOTS DC will send all data belonging to NE1, to Agency1.
The SPOTS DC will send data to the registered agents of Agency1, according to the Data Type
of the data to send. It is assumed that the Agency1 has all required agents registered. The last
rule is the default and the behavior is the same as described earlier.

*;TGRP;Agency1
*;*;*

In this example the data that belongs to the measure TGRP is sent to the Agent that processes
the required Data Type in Agency 1. The next rule in the list is the default rule.

IMPORTANT NOTE
The <Measurement Name> and Data Type are linked, because a specified measurement is
processed by an RT Agent registered with only a specified Data Type. This link is more relevant
when Virtual Counters are involved.
In this case all the associated Measurements for a given NE and Object Class must be sent to
the same agent, so that all the required Virtual Counters are correctly calculated and
Thresholds for those Virtual Counters can be evaluated.

 It is up to the SPOTS Administrator to ensure that the rules follow these details. No
provision will be made on SPOTS to impose these restrictions.

Hints:

1. If the loader threads increase, for example from the default value to four thread
converters, the agencies become overloaded, as data is sent faster than usual. This
happens especially in scenarios D and E.

2. If the loader threads keep the default value but the crontab job scripts run background
processes (two loaders running in parallel on different data types), the agencies also
become overloaded, as data is still sent faster than usual. This also happens especially
in scenarios D and E.

3. If the loader threads increase to four and background processes are also running, the
consequences are as follows. On a machine with a large hardware configuration (four
dual-core CPUs, 16GB of memory), four threads times two loaders running in parallel
equals 8 threads, which means that 8 CPUs are in use. In RT scenario D, with three
agencies each using one CPU, but each with two agents processing data in parallel,
the agencies can consequently run into “java memory heap space” errors; note that
both agents in the same agency share the same CPU rather than using different CPUs.

4. The best scenario(s) is defined by trial and error and by evaluating each customer’s
hardware specifications.

5. The table below can be filled in with all the parameters and scenarios to allow the
person configuring RT to make empirical evaluations of the best configuration.

Spots HW   LoaderThreads   Loaders’ background   Agencies   Agents   Parameters   Scenario

Small      2               0                     1          3                     A
           2               0                     3          3                     B
           2               1                     3          3
           …

Large      2               0                     3          3                     C
           2               1                     3          3
           2               0                     3          6                     D
           2               1                     3          6
           2               0                     12         12
           2               0                     6          12                    E

           4               0                     3          3
           4               1                     3          3
           4               0                     3          6
           4               1                     3          6
           4               0                     6          12


Annex 5 – Configuration Worksheet

During the initial SPOTS installation, you are asked for parameterisation information to be used
in the system.

You should therefore understand the topics indicated below, and register all the information
indicated in the ensuing Worksheet, before starting the software installation.

Workstation/Server Configuration Parameters

Host name The workstation name, as specified in /etc/hosts


Example: “pms01”
Host IP address The network administrator assigns the workstation/server Internet
Protocol address.
Example: 129.200.9.1
Netmask for subnets The value you enter depends on the internet address class and
whether sub-networks are used (see table below).
Example: 255.255.255.0
Default Router Specify a default IP router or let Solaris installation program find
one. In the first case provide the “Router IP address”.
Geographical region Choose one of the possible values.
Example: “Europe”
Time zone Choose one of the possible values.
Example: “Middle Europe”

Internet address classes and default netmasks

Address Class   Byte1     Byte2   Byte3   Byte4   Default Netmask

A               0-127     1-254   1-254   1-254   255.0.0.0
B               128-191   1-254   1-254   1-254   255.255.0.0
C               192-223   1-254   1-254   1-254   255.255.255.0

SPOTS installation data


SPOTS Naming Server Use this field to record the SPOTS Naming Server IP Address
and TCP/IP Port configured during its installation. This might be
useful since SAS, SDS and SCL all need this information.
Database configuration Decide which SPOTS database configuration to use: Small,
Medium or Large.

Note:
Contact your Nokia Siemens Networks representative to know
which configuration best fits your network, and check the
corresponding hard disk requirements in the following section.


Global data

Operating System Data


Host Name

Host Internet Address

Netmask for subnets

Default IP router

Geographical region

Time zone

SPOTS Installation Data


SPOTS Naming Server IP Address:

Port:

Database configuration    [ ] Small     [ ] Partitioning option
                          [ ] Medium    [ ] Partitioning option
                          [ ] Large     [ ] Partitioning option

Previous OS installation boot device


old boot-
device

Disk Partition Information for Backup and Restore

If you will make system-oriented backups, the information on the tables below is used to
recover from abnormal situations.
 IMPORTANT NOTE: You must fill the tables in order to recover from disaster situations if
you will make system-oriented backups.

Disk: c__t__d__

Slice    File System Name    Tag    Flag    Cylinders    Size (GB)    Tape Block (1)

(1) This information is inserted during System Backup and represents the block order
number (within a tape) of the saved file system; it is relevant for the System Restore
mechanism, mainly for the Multiple Tape Backup scenario —see [3].

If a backed up file system spans more than one (consecutive) tape, register which
tapes, and corresponding block order number within them, where it is saved.

E.g.: The first part of the file system ‘/spots_db1’ was saved at the end of tape #1
(starting at block #7) and the remaining at the beginning of tape #2.

The corresponding entry for Tape Block should be: T1/B7, T2/B1


Disk Partition Information for Fault Tolerance


If you will install fault tolerance, the tables below store information vital for disk
substitution in case of disk failure.
 IMPORTANT NOTE: You must fill in all the information if you will install fault tolerance.
 The information below is only needed for mirrored disks.

Information to fill before OS installation


On the Disk Location Table, fill in the following fields, one row for each disk:
• Rack Number: This is the number printed on the rack where the disk is
connected. On machines that do not have rack numbers printed, fill in “Up”,
“Down”, “Left”, “Right” or any other word that can help to identify the disk
location. On Sun Fire V240 and V445, fill in the information using the form
HDDi, where i stands for the rack number.
• SN (Serial Number): This number is written on the disk box and on the disk itself.
When you open the front panel of a Sun Fire V240 or V445, on the front of the
Sun Fire V240 you will see the following:

HDD2 serial number serial number HDD3

HDD0 serial number serial number HDD1

On Sun Fire V440 you will see the following:

HDD3 serial number

HDD2 serial number

HDD1 serial number

HDD0 serial number

Write down the relation between hard disks and serial number in the disk location table.

Information to fill during OS installation


On the Disk Partition Tables, fill the following fields:
• Disk id: Name used to identify disks (ex c2t4d0). You can obtain the information during
disk partitioning.
• Partition name: the mount point directory if the partition is mountable.
• Cylinders: You can obtain the information during disk partitioning.
• Size (MB): You can obtain the information during disk partitioning.

Information to fill after OS installation
On the Boot Device Table, fill the following information:
• boot device: boot device to boot from in the usual case. Obtain it from the first value of
the boot-device parameter in the output of the eeprom command (see the example after
this list).
On the Disk Partition Tables, fill the following fields:
• Geometry: Disk Geometry (ex :SUN4.2G cyl 3880 alt 2 hd 16 sec 135). You obtain the
information with the command format.
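For illustration, the current boot-device value can be read as shown below; the device names in
the output are examples only:

# eeprom boot-device
boot-device=disk:a disk1:a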

Information to fill during disk configuration with Solaris Volume Manager


On the Boot Device Table, fill the following information:
• Alternate boot device: boot device to boot from in case of master boot device failure.
Note that the boot-device parameter is already set with the alternate boot device, so the
system automatically boots from there in case of disk failure.

Information to fill after disk configuration with Solaris Volume Manager


On the Disk Partition Tables, fill the following information:
• Mirror and Submirror: refer to section 6.2.1.4 and use the information to fill the fields.
• Rack, SN: Refer to the system Service Manual to associate the disk ids with the SN.


Boot Device Table


Boot Device

Alternate
boot Device

Disk Location Table


Rack Nr. SN
HDD0
HDD1
HDD2
HDD3

Disk Partition Tables
Print this page as many times as needed (one per mirror) and store them in a safe place.

Disk id: c__t__d__ Rack NR:


SN:
Geometry
Slice Partition name Cylinders Size MB Mirror Submirror
0
1
3
4
5
6
7

Disk id: c__t__d__ Rack NR:


SN:
Geometry
Slice Partition name Cylinders Size MB Mirror Submirror
0
1
3
4
5
6
7

Annex 6 – System Backup & Restore


System Backup

 These System Backup procedures are applied only to small DB Installation Types. A
Legato based solution is available to support Backup and Restore for medium and
large DB Installation Types. See [3] for more information.

 For the System Backup procedures you must use a non-rewind tape device

The System Backup script supports a multi-system and/or multi-volume backup; if more than
one tape is necessary, the user will be asked to replace tapes before proceeding.
All running processes will be stopped, before starting the backup.

 Verify that, for each disk, the table “Disk Partitions” presented in Annex 5 is completely
and correctly filled in.
A worst-case recovery process (when at least one of the partitions “/” or “/usr” needs to be
recovered, i.e. the Operating System is unavailable) cannot be accomplished if this table
is incomplete and/or incorrectly filled in.
The information needed for the first column can be obtained with the command:
# df -k | grep dev

 The information for the remaining columns can be obtained executing the following
sequence of commands:
(a) Login as root user.
(b) Run the command ‘format’.
(c) Choose a disk number (e.g., start with the “lowest”).
(d) Choose the ‘Partition’ option (type “p”).
(e) Choose the ‘Print’ option (type “p”).
(f) Fill in the information in the table “Disk Partitions” (Annex 5).
(g) Quit the current menu, typing “q”.
(h) Type “disk” to choose the next disk.
(i) Go to step (c).

To save the existing environment, the following sequence of commands must be executed:

 Exit from SPOTS application (if it is running).

 Login as root user.


 Verify that SPOTS_DIR environment variable is correctly set. It must point to the
base directory where SPOTS is installed. For further information refer to Annex 1.

 Shutdown all running processes:
# /etc/shutdown -y -g0 -i0
Boot prompt> boot -s
Type control-d to proceed with normal startup,
(or give root password for system maintenance): < root-password>
# mountall

 Insert a new tape into the tape drive; make sure that the tape is ‘write enabled’ (tab at
‘REC’ position).

 Set SPOTS_DIR environment variable pointing to the directory where spotsAS is installed
(by default ‘/opt/spots-pms’) with the following commands, for example:
# SPOTS_DIR=/opt/spots-pms
# export SPOTS_DIR

 Change to the directory “$SPOTS_DIR/bin” and call the System Backup script with a
non-rewind tape device:
# ./sysBackup <tape_device_name>
 Check in your system for the current <tape device name>
(e.g. “/dev/rmt/1n”).
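For example, using the tape device name given above (illustrative), the call would be:
# ./sysBackup /dev/rmt/1n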
 Check the log file “$SPOTS_DIR/logs/spotsBackup.log” for errors and, for each
backed up file system, fill the column “Tape Block” in the table “Disk Partitions”
(Annex 5) with the corresponding tape (T) and block (B) order numbers.
Information about the successfully dumped file systems can also be found in the
file “/etc/dumpdates”.

 Reboot the system:


# /etc/shutdown -y -g0 -i6

 Eject the tape and protect it against accidental erasure (tab at ‘SAVE’ position).
 Label the tape with:
• the backup date;
• the volume number;
• the system name;
• how to restore tape contents (sysRestore);
• the password of the root user (after a System Restore, this password will
be the one that was valid at the time where the corresponding System
Backup was executed).

 The backup procedure is terminated.


System Restore

 These System Restore procedures are applied only to small DB Installation Types. A
Legato based solution is available to support Backup and Restore for medium and
large DB Installation Types. See [3] for more information.

 For the System Restore procedures you must use a non-rewind tape device

SPOTS provides two different processes for restoring a full System Backup:

Single Tape Backup


The script “sysRestore”, leading the user through the recovery of each file system,
automatically performs the restoring activities.
 The System Restore script assumes that the partition settings in the current
disk(s) (controller / tray / disk / slice) match, one by one, those that are stored in
the tape (and therefore, as it was in the backed-up disk at the backup time).

Multiple Tape Backup


If more than one tape was used for backup, the restoring activities will be requested by
the user, using, as reference, the information that was registered in the column “Tape
Block” of the table “Disk Partitions” (Annex 5).

For both, three different recovery mechanisms are used, according to the availability or not of
the Operating System (OS):
• Standard OS Disk Recovery (1.1)/(2.1)
If the system is not bootable (at least one of the partitions “/”, “/usr” need to be
recovered).
• Mirrored OS Disk Recovery (1.2)/(2.2)
If the root ( “/” ) partition is using mirroring
• Non-OS Disk Recovery (1.3)/(2.3)
The system is bootable.

 Database file systems (/spots_db*) must not be restored separately. If you need to
restore any /spots_db* partition you must restore them all.
 When restoring all spots_db* file systems (don’t restore them in separate, otherwise
the database will be broken) restore also the /opt partition.

1 Single Tape Backup

1.1 OS Disk Recovery

In this situation the system is not bootable. Therefore, you must boot from the Solaris
installation DVD, re-create, if necessary, the partitions on the hard disk (e.g., in the case you
are using a new and unformatted disk) and recover these file systems:

 Insert the Solaris 10 Software DVD and enter the following command:
ok boot cdrom -s

 The next two steps are applied only if the current disk partitioning differs from the one that
is previously registered in the table “Disk Partitions” (Annex 5).

 For each disk, restore the partition table:


(a) Run the command ‘format’.
(b) Choose a disk number (e.g., start with the “lowest”).
(c) Choose the ‘Partition’ option (type “p”).
(d) Choose the ‘Print’ option (type “p”).
(e) For the all the disk partitions, provide the information as defined in the table “Disk
Partitions” (Annex 5).
(f) Label the disk, running the command ‘label’ and quit the current menu, typing “q”.
(g) Type “disk” if there are additional disks to be formatted; otherwise, quit format
utility typing “q”.
(h) Go to step (b).

 Create the file system for the swap partition (for the remaining partitions, this will be done
by System Restore script):
# newfs /dev/rdsk/<device name>

 For details about the partition’s <device name> (e.g., c0t0d0s0), refer to the table
“Disk Partitions” in the (Annex 5)

 Create tape device entries:


# devfsadm
# cfgadm -al

 Ignore any error message that may appear like this one:
devfsadm: mkdir failed for /dev 0x1ed: Read-only file system

 Insert the backup tape into the tape drive, extract the System Restore script and execute
it:
# mt -f <tape_device> rewind
# dd bs=8k if=<tape_device> of=/tmp/sysRestore
# chmod 755 /tmp/sysRestore
# /tmp/sysRestore <tape device name>


 Check in your system for the current <tape device name>


(e.g. “/dev/rmt/1n”).
 During the execution of the script, a menu-driven utility will guide the user for the
recovery of each file system.

 Reboot the system:


# init 6

 The system disk is recovered and restored.

1.2 Mirrored OS Disk Recovery

In this situation the system is not bootable and the root partition was mirrored. Therefore, you
must boot from the Solaris installation CD, re-create, if necessary, the partitions on the hard
disk (e.g., in the case you are using a new and unformatted disk) and recover these file
systems:

 Note: If you have one (or more) external disk Arrays present in the system (e.g. Sun
StorEdge 3320), make sure the Array is turned on during this phase.

 Insert the Solaris 10 Software DVD and enter the following command:
ok boot cdrom -sw

 The next two steps are applied only if the current disk partitioning differs from the one that
is previously registered in the table “Disk Partitions” (Annex 5).

 For each disk, restore the partition table:


(i) Run the command ‘format’.
(j) Choose a disk number (e.g., start with the “lowest”).
(k) Choose the ‘Partition’ option (type “p”).
(l) Choose the ‘Print’ option (type “p”).
(m) For the all the disk partitions, provide the information as defined in the table “Disk
Partitions” (Annex 5).
(n) Label the disk, running the command ‘label’ and quit the current menu, typing “q”.
(o) Type “disk” if there are additional disks to be formatted; otherwise, quit format
utility typing “q”.
(p) Go to step (j).

 Create the file system for the swap partition (for the remaining partitions, this will be done
by System Restore script):
# newfs /dev/rdsk/<device name>

 For details about the partition’s <device name> (e.g., c0t0d0s0), refer to the table
“Disk Partitions” in the (Annex 5)

 Create tape device entries:


# devfsadm

 Ignore any error message that may appear like this one:
devfsadm: mkdir failed for /dev 0x1ed: Read-only file system

 As root user, load the backup tape #1 into the tape drive, extract the file File System
Name List (“/tmp/fsnlist”) and read it, executing these commands:
# mt -f <tape device name> rewind
# dd bs=8k if=< tape device name > of=/dev/null
# dd bs=8k if=< tape device name > of=/tmp/fsnlist
# cat /tmp/fsnlist
 Check in your system for the current <tape device name>
(e.g. “/dev/rmt/1n”).

 The lines in the file File System Name List were produced according to this syntax:
<seq_number> <partition> <file_system>

where:
<seq_number> is a sequential number for ordering the file systems that were backed
up.
<partition> is the device name to which the file system will be restored.
<file_system> is the name of the file system to be restored.

Example:
1 /dev/md/rdsk/d100 /
2 /dev/md/rdsk/d120 /var/opt
3 /dev/md/rdsk/d70 /export/home
4 /dev/md/rdsk/d80 /opt
5 /dev/rdsk/c3t1d0s0 /spots_db1
6 /dev/rdsk/c3t1d0s1 /spots_db2
7 /dev/rdsk/c4t1d0s0 /spots_db3
8 /dev/rdsk/c4t1d0s1 /spots_db4
9 /dev/rdsk/c2t0d0s0 /spots_db5
10 /dev/rdsk/c2t0d1s0 /spots_db6
From the list of file systems, identify the “/” filesystem .

 Execute the following command:


# /usr/bin/mt -f <tape device name> rewind

 Additionally, and only if tape #1 has been loaded (or in the case of a single tape
bakcup), execute the following command:
# /usr/bin/mt -f <tape device name> fsf 2
 Check in your system for the current <tape device name>
(e.g. “/dev/rmt/1n”).

 Move the tape to the beginning of the desired file system, executing the following
command:
# /usr/bin/mt -f <tape device name> fsf [(b[i] - b[i-1]) - 1]
Where:
b[i] is the block order number within a tape for the file system to be recovered —
see the Tape Block information in the “Installation Configuration Worksheet / Disk
Partitions” (Annex 5).
b[i-1] is the block order number within the same loaded tape of the previously
recovered file system. Assume b[i-1] = 0 if, for the same loaded tape, no file system had
been restored yet.
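As a worked example with illustrative block numbers: if the file system to recover was saved at
block #5 of the loaded tape (b[i] = 5) and the file system previously restored from that same
tape was at block #2 (b[i-1] = 2), the tape must skip (5 - 2) - 1 = 2 file marks:

# /usr/bin/mt -f /dev/rmt/1n fsf 2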

 Mount the desired file system in the correct ‘controller / tray / disk / slice’:
# mount /dev/dsk/<device name> /mnt


 For details about the partition’s <device name> (e.g., c0t0d0s0), refer to the table
“Disk Partitions” in the (Annex 5)

 Change to the directory “/mnt” and restore the file system, executing the following
commands:
# cd /mnt
# ufsrestore rfv <tape device name>

 Change the terminal settings by executing:


# TERM=vt100
# export TERM

 Edit with vi the /mnt/etc/system and remove all lines between and including:
* Begin MDD root info (do not edit)
(…)
* End MDD root info (do not edit)

 Edit with vi the /mnt/etc/vfstab and:


o Delete all lines starting with /dev/md/dsk
o Uncomment all lines (including the replicas) that are related to the original disk
partitions defined in Annex 5.
o Add one line referring to the root (“/”) partition with the correct device name.
e.g.:

/dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 / ufs 1 no -

 Create the mounting points for the replicas and mirrors:


# cd /mnt

# mkdir replica1
# mkdir replica2
# mkdir replica3
# mkdir replica4
# mkdir replica5
# mkdir replica6
# mkdir replica7
# mkdir replica8
# mkdir root_mirror
# mkdir swap_mirror
# mkdir home_mirror
# mkdir var_opt_mirror
# mkdir opt_mirror

 For configurations where the spots_db partitions were on the internal disks, the mirror
directories should now be recreated (for i in 1 to 6, according to the configuration
used):

# cd /mnt

# mkdir spots_db(i)_mirror

 Create the filesystem for each replica1, replica2, replica3, replica4, swap and
swap_mirror partitions, and also for spots_db3 and spots_db4 partitions if they were
defined in the internal disks:
# newfs /dev/rdsk/<device name>

 For details about the partition’s <device name> (e.g., c0t0d0s0), refer to the table
“Disk Partitions” in the (Annex 5)

 Additionally, create the filesystem for the other partitions (and its mirror) that you intend to
restore e.g. “/opt” and “/opt_mirror”, “/export/home” and “/home_mirror”, etc.:
# newfs /dev/rdsk/<device name>

 For details about the partition’s <device name> (e.g., c0t0d0s0), refer to the table
“Disk Partitions” in the (Annex 5)

 Remove the previous Fault Tolerance directories used by Spots:


# rm -rf /mnt/var/diskman/step1 /mnt/var/diskman/step2

 Change to the directory “/” and un-mount the file system “/mnt”, executing these
commands:
# cd /
# umount /mnt

 Replace the boot block on the hard disk:


# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/<device name>

 <device name> is the root “/” device e.g. c0t0d0s0, refer to the table “Disk
Partitions” in the (Annex 5)

 Note: If you have turned off the external disk Arrays present in the system (e.g. Sun
StorEdge 3320), you must turn the power on now.

 Reboot the system, this time from the disks with:


# reboot -- -r

 Install and configure the mirroring as it is described in Chapter 6 - Fault Tolerance with
disk mirroring

 The (system) disk is restored.

 Recover the remaining partitions from the tape, as it is described in this annex in section
1.3 - Non-OS Disk Recovery (or 2.3).


1.3 Non-OS Disk Recovery

 NOTE: Storedge configuration recovery


If one (or more) external disk Arrays are present in the system and a loss of
configuration occurs, use an external computer to connect to the SE3320 Management
Console (serial connection) and reconfigure it (logical drives + host luns) according to
the procedure described in the proper Annex of this Installation Guide (Annex 6 or
Annex 7).
If you lost the data in the partition “/opt”, then you need to reinstall the StorEdge
software; refer to Chapter 7 - SPOTS Configurations with .

In this situation the system is bootable. Therefore, the restore mechanism is restricted to load
and execute the System Restore script:

 Stop SPOTS, including all scheduled jobs and daemons — see Stopping SPOTS,
Chapter 4, Section 4.1.

 Shut down the Oracle instance, if it exists, executing the following command as root
user:
# /etc/init.d/dbora stop

 Bring the system to Single User Mode:


# /etc/shutdown -y -g0 -iS

 Unmount all file systems:

# umount -a

 Make sure that all file systems that you plan to restore are correctly unmounted. If one or
more file systems are not correctly unmounted, execute the steps described in Chapter
6.2.7 - Detect and terminate processes that are using a filesystem.
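As a quick illustrative check (not a replacement for the full procedure in Chapter 6.2.7), the
processes still using a mount point can be listed with the standard Solaris fuser command, for
example:
# fuser -c /spots_db1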

 Insert the backup tape into the tape drive, extract the System Restore script and execute
it:
# mt -f <tape_device> rewind
# dd bs=8k if=<tape_device> of=/tmp/sysRestore
# chmod 755 /tmp/sysRestore
# /tmp/sysRestore <tape device name>
 Check in your system for the current <tape device name>
(e.g. “/dev/rmt/1n”).
 During the execution of the script, a menu-driven utility will guide the user for the
recovery of each file system.

 Reboot the system:


# /etc/shutdown -y -g0 -i6

 The (non-system) disk is restored.

2 Multiple Tape Backup

2.1 OS Disk Recovery

 Proceed with the steps 1 through 4 of the similar section of Single Tape Backup (this
Annex, Section 1.1).

 Collect the information related with the file systems to be restored — see Planning File
Systems Recovery (this Annex, Section 2.1.1).

 Restore the file systems — see Restoring File Systems (this Annex, Section 2.1.2).

 Reboot the system:


# /etc/shutdown -y -g0 -i6

 The system disk is recovered and restored.

2.1.1 Planning File Systems Recovery

Before restoring file systems from a full System Backup, check which are available in the
tapes and take in account the further considerations in this section.

 As root user, load the backup tape #1 into the tape drive, extract the file File System
Name List (“/tmp/fsnlist”) and read it, executing these commands:
# mt -f <tape device name> rewind
# dd bs=8k if=< tape device name > of=/dev/null
# dd bs=8k if=< tape device name > of=/tmp/fsnlist
# cat /tmp/fsnlist
 Check in your system for the current <tape device name>
(e.g. “/dev/rmt/1n”).

 The lines in the file File System Name List were produced according to this syntax:
<seq_number> <partition> <file_system>
where:
<seq_number> is a sequential number for ordering the file systems that were backed
up.
<partition> is the device name to which the file system will be restored.
<file_system> is the name of the file system to be restored.

Example:


1 /dev/rdsk/c2t1d0s0 /
2 /dev/rdsk/c2t1d0s5 /var
3 /dev/rdsk/c2t1d0s3 /export/home
4 /dev/rdsk/c2t1d0s4 /opt
5 /dev/rdsk/c1t1d0s0 /spots_db1
6 /dev/rdsk/c1t2d0s0 /spots_db2
7 /dev/rdsk/c1t3d0s0 /spots_db3
8 /dev/rdsk/c1t4d0s0 /spots_db4
9 /dev/rdsk/c1t5d0s0 /spots_db5
10 /dev/rdsk/c1t6d0s0 /spots_db6

From the list of file systems, select those to be restored, taking into account that file systems
must be restored in the ascending sequential order in which they are described in the File
System Name List file.
According to the previous selection, identify the tapes that will be used in the restore process
by reading the Tape Block information that was registered in the “Installation Configuration
Worksheet / Disk Partitions” (Annex 5).

2.1.2 Restoring File Systems

 NOTE: Storedge configuration recovery


If one (or more) external disk Arrays are present in the system and a loss of
configuration occurs, use an external computer to connect to the SE3320 Management
Console (serial connection) and reconfigure it (logical drives + host luns) according to
the procedure described in the proper Annex of this Installation Guide (Annex 6 or
Annex 7).

For each file system to be restored, execute, as root user, the following steps:

 If the tape’s file system doesn’t exist in the disk at restore time, create it in the target disk
partition, executing the following command:
# newfs /dev/rdsk/<device name>

 Refer to the table in the “Installation Configuration Worksheet / Disk Partitions”


(Annex 5) for details about the partition’s device name (e.g., c0t0d0s0).

 If the tape with the beginning of the file system to be restored is not loaded, load it and
execute the following command:
# /usr/bin/mt -f <tape device name> rewind
Additionally, and only if tape #1 has been loaded, execute the following command:

# /usr/bin/mt -f <tape device name> fsf 2


 Check in your system for the current <tape device name>
(e.g. “/dev/rmt/1n”).

 Move the tape to the beginning of the desired file system, executing the following
command:
# /usr/bin/mt -f <tape device name> fsf [(b[i] - b[i-1]) - 1]

where:
b[i] is the block order number within a tape for the file system to be recovered —
see the Tape Block information in the “Installation Configuration Worksheet / Disk
Partitions” (Annex 5).
b[i-1] is the block order number within the same loaded tape of the previously
recovered file system. Assume b[i-1] = 0 if, for the same loaded tape, no file system had
been restored yet.

 Mount the desired file system in the correct ‘controller / tray / disk / slice’:
# mount /dev/dsk/<device name> /mnt

 Change to the directory “/mnt” and restore the file system, executing the following
commands:
# cd /mnt
# /usr/bin/ufsrestore rfv <tape device name>
 If the file system to be restored is spanned in more than one tape, when the end of
the first one is reached, a message is prompted, asking for changing the tape to
proceed with the recovery.

 Change to the directory “/” and un-mount the file system “/mnt”, executing these
commands:
# cd /
# umount /mnt

2.2 Mirrored OS Disk Recovery

 The procedure to be used is the same as described in Section 1.2 of this Annex.


2.3 Non-OS Disk Recovery

 Stop SPOTS, including all scheduled jobs and daemons — see Stopping SPOTS,
Chapter 4, Section 4.1.

 Reboot the system:

# init s

 Login as root user.

 Shut down the Oracle instance, if it exists, and un-mount all existing file systems,
executing the following commands:
# /etc/init.d/dbora stop
# cd /
# umount -a

 Make sure that all file systems that you plan to restore are correctly unmounted. If one or
more file systems are not correctly unmounted, execute the steps described in Chapter
6.2.7 - Detect and terminate processes that are using a filesystem.

 Collect the information related with the file systems to be restored — see Planning File
Systems Recovery (this Annex, Section 2.1.1).

 Restore the file systems — see Restoring File Systems (this Annex, Section 2.1.2).

 Reboot the system:


# /etc/shutdown -y -g0 -i6

 The (non-system) disk is restored.

Annex 7 – External Storage Setup for
Medium Configuration


Spots StorEdge Medium A Configuration

 This annex should only be used for the Medium A Configuration. Server is a Sun
Fire V445.

 Login as root user.

 Since you will need a second terminal for completing this process, you will need
to edit file /etc/default/login and comment the following line:
# If CONSOLE is set, root can only login on that device.
# Comment this line out to allow remote login by root.
#
CONSOLE=/dev/console
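After commenting, the line should read as follows (illustrative):

#CONSOLE=/dev/console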

 Remember to uncomment this line again after the installation is done, since leaving
remote root login enabled is a security hazard.

 Execute the following StorEdge Configuration CLI commands:


# sccli
sccli> show inquiry

If the Revision (Firmware version) of your SE3320 is 3.25S execute:


sccli> set drive-parameters auto-detect-swap-interval 60000
sccli> set drive-parameters polling-interval 30000
sccli> exit
#

If the Revision (Firmware version) of your SE3320 is 4.12E or superior, execute:


sccli> set drive-parameters auto-detect-swap-interval 60s
sccli> set drive-parameters polling-interval 30s
sccli> exit
#

 Execute the following commands:


# TERM=vt100
# export TERM
# tip -38400 /dev/ttyb

 The following window appears:


 You may have to refresh the screen by pressing CTRL+L

 The selection is done with the “Enter” key; confirmation is sometimes done with the “ESC” key.

Figure 28, Interface for StorEdge 3320 Configuration

 Select “Terminal (VT100 Mode)”, and the following window appears:

Figure 29, Main Menu window


Removing all Host Luns

 Select “view and edit Host luns”, and select “CHL 1 ID 0 (Primary controller)”:

Figure 30, Main Menu Channel selection

 Select the “LUN 0” by pressing the “Enter” key:

Figure 31, Main Menu Unmap LUN

 Choose “Yes”

 Remove the remaining Host Luns if they exist. Hit the “Esc” key to exit the LUN
table for Channel 1 (CHL 1).

 Using the arrow keys execute the same process for Channel 3 LUN (CHL 3),
see Figure 30, Main Menu Channel selection.

 After removing all the Luns from the StorEdge, hit the “ESC” key several times
until you are in the Main Menu window

Figure 32, Main Menu window


Removing all logical drives

 Using the arrow keys select “view and edit Logical drives”, and the following
window appears:

Figure 33, Logical Drives table

 Select the first logical drive “P0” by pressing the “Enter” key. The following
window will appear:

Figure 34, Actions for Logical Drives

 Move the cursor and select “Delete logical drive” by pressing the “Enter” key.

 Select “Yes” in the confirmation window.

 Now proceed by deleting the remaining drives, executing the same steps as for
the first logical drive (see Figure 33, Logical Drives table; the configuration
provided in the table is just an example of a configuration that was done on the
StorEdge device and differs from the original one).

 The table of logical drives now appears empty.


Creating Logical Drives

 Select the first empty slot (using the “Enter” key):

Figure 35, Create Logical Drive confirmation

 Select “Yes”.

Figure 36, Raid level selection

 You will now be prompted to select the RAID type that is going to be used in
that logical drive. Select RAID 1.

Figure 37, Disk Selection

 Now, using ENTER key, select the disks that are going to be used in the logical
drive, select the first disk of each channel. After selecting the disks, hit ESC key
twice to confirm.

Figure 38, Logical Drive Creation confirmation


 Confirm the creation of the logical drive selecting the “Yes” option. Some notice
messages related to the logical drive may appear. Hit the ESC key in all of them
and until you return to the logical drive configuration menu.

 Create the second logical drive using RAID1.

Figure 39, Second logical drive creation

 Using the ENTER key, select the disks that are going to be used in the logical
drive. Select all the remaining 10 disks. After selecting the disks hit ESC key
twice and select “Yes” to confirm the creation of the logical drive.

Figure 40, Second logical drive disk selection

 Before creating the logical drive, assign it to the secondary controller by
selecting “Logical Drive Assignments”; select “Yes” and hit the ESC key to confirm.
Hit the ESC key to dismiss the informative popup windows that appear. Also make
sure that the stripe size is set to 128K.

Figure 41, Redundant controller assignment

 Now partition the second logical drive.

Figure 42, Partition second logical drive


 Confirm the warning with YES and create the first partition with approximately
half of the logical drive size, in our case 175000 MB.

Figure 43, First partition

 Confirm with ENTER. As a result two partitions are created.

Figure 44, First partition


Hit the Esc key until you are in the main menu.

Creating Host LUN maps

 Select view and edit host luns, select channel 1, the logical drive and the first
available slot.

Figure 45, Logical Drive Selection

 Select the first logical drive and hit ENTER key twice. Confirm the Host Lun
creation.

Figure 46, Map Host Lun confirmation

 Use the same procedure to create 2 host luns on the second channel, 1 host
lun for each partition.


Figure 47, Second host lun configuration

 Two host luns created on second controller.

Figure 48, Host lun configuration

 You can verify the status in the main menu. (Press ESC until you reach it)

Figure 49, Main Menu

 After having finished the previous steps, run the following commands as user
root:
# update_drv -f sd
(…)
# devfsadm
# /cdrom/cdrom0/storedge/3320.part.ksh

 All the new drives are now available on the operating system.

 Still as root, issue the following command:


# /cdrom/cdrom0/storedge/stor.chg.cron.sh

 Using the vi command, edit /etc/spots.ss3320.conf.email and replace the addresses of
the users that will receive notifications in case of hard disk failure.

 To verify that the StorEdge 3320 is properly configured, issue the following
command for the different configurations:
# sccli
(...)
sccli> show ld

 You should get an output similar to the following:

LD    LD-ID      Size    Assigned    Type    Disks   Spare   Failed   Status
-----------------------------------------------------------------------------
ld0   3ADD9DE1   146GB   Primary     RAID1   2       0       0        Good
ld1   1F7D914F   876GB   Secondary   RAID1   10      0       0        Good


 Please check the values of the following columns: Size, Assigned, Type, Disks, Spare,
Failed and Status.

 To quit the StorEdge Command Line Interface, issue the following command:
sccli> exit

 The StorEdge 3320 is now fully configured.

Proceed to the Oracle Software Installation in Chapter 8 if you are not going to
upgrade your hardware configuration.

Spots StorEdge Medium B Configuration

 This annex should only be used for the Medium B Configuration. Server is a Sun
Fire V490.

 Login as root user.

 Since you will need a second terminal for completing this process, you will need
to edit file /etc/default/login and comment the following line:
# If CONSOLE is set, root can only login on that device.
# Comment this line out to allow remote login by root.
#
CONSOLE=/dev/console

 Remember to uncomment this line again after the installation is done, since leaving
remote root login enabled is a security hazard.

 Execute the following StorEdge Configuration CLI commands:


# sccli
sccli> show inquiry

If the Revision (Firmware version) of your SE3320 is 3.25S execute:


sccli> set drive-parameters auto-detect-swap-interval 60000
sccli> set drive-parameters polling-interval 30000
sccli> exit
#

If the Revision (Firmware version) of your SE3320 is 4.12E or superior, execute:


sccli> set drive-parameters auto-detect-swap-interval 60s
sccli> set drive-parameters polling-interval 30s
sccli> exit
#

 Since the V490 does not come with a standard RS232 serial port, a standard
Windows PC with a serial port is needed for the following steps; refer to Annex 12
and, after the connection has been established, return to this chapter.

 A similar window will appear (the screenshots below were taken using Solaris; the
content shown inside the HyperTerminal window is the same):
 You may have to refresh the screen by pressing CTRL+L


 The selection is done with the “Enter” key; confirmation is sometimes done with the “ESC” key.

Figure 50, Interface for StorEdge 3320 Configuration

 Select “Terminal (VT100 Mode)”, and the following window appears:

Figure 51, Main Menu window

Removing all Host Luns

 Select “view and edit Host luns”, and select “CHL 1 ID 0 (Primary controller)”:

Figure 52, Main Menu Channel selection

 Select the “LUN 0” by pressing the “Enter” key:

Figure 53, Main Menu Unmap LUN

 Choose “Yes”

 Remove the remaining Host Luns if they exist. Hit the “Esc” key to exit the LUN
table for Channel 1 (CHL 1).

 Using the arrow keys execute the same process for Channel 3 LUN (CHL 3),
see Figure 52, Main Menu Channel selection.


 After removing all the Luns from the StorEdge, hit the “ESC” key several times
until you are in the Main Menu window

Figure 54, Main Menu window

Removing all logical drives

 Using the arrow keys select “view and edit Logical drives”, and the following
window appears:

Figure 55, Logical Drives table

 Select the first logical drive “P0” by pressing the “Enter” key. The following
window will appear:

Figure 56, Actions for Logical Drives

 Move the cursor and select “Delete logical drive” by pressing the “Enter” key.

 Select “Yes” in the confirmation window.


 Now proceed by deleting the remaining drives, executing the same steps as for
the first logical drive (see Figure 55, Logical Drives table; the configuration
provided in the table is just an example of a configuration that was done on the
StorEdge device and differs from the original one).

 The table of logical drives now appears empty.

Creating Logical Drives

 Select the first empty slot (using the “Enter” key):

Figure 57, Create Logical Drive confirmation

 Select “Yes”.

Figure 58, Raid level selection

 You will now be prompted to select the RAID type that is going to be used in
that logical drive. Select RAID 1.

Figure 59, Disk Selection

 Now, using ENTER key, select the disks that are going to be used in the logical
drive, select the first disk of each channel. After selecting the disks, hit ESC key
twice to confirm.


Figure 60, Stripe Size selection

 Alter the Stripe Size to 128K and hit the ESC key.

Figure 61, Logical Drive Creation confirmation

 Confirm the creation of the logical drive selecting the “Yes” option. Some notice
messages related to the logical drive may appear. Hit the ESC key in all of them
and until you return to the logical drive configuration menu.

Figure 62, Second logical drive creation

 Create the second logical drive using RAID1.

 Using the ENTER key, select the disks that are going to be used in the logical
drive. Select all the remaining 10 disks. After selecting the disks hit ESC key
twice and select “Yes” to confirm the creation of the logical drive.

Figure 63, Second logical drive disk selection


 Before creating the logical drive, assign it to the secondary controller by
selecting “Logical Drive Assignments”; select “Yes” and hit the ESC key to confirm.
Hit the ESC key to dismiss the informative popup windows that appear. Also make
sure that the stripe size is set to 128K.

Figure 64, Redundant controller assignment

Figure 65, Alter stripe size to 128KB for the second logical drive

Figure 66, Second logical drive creation

Creating Host LUN maps

 Select view and edit host luns, select channel 3, the logical drive and the first
available slot.

Figure 67, Main Menu Host Luns


Figure 68, Map Host Lun Controller selection

 Select the primary controller.

Figure 69, Map Host Lun Controller selection

 Select “Yes”.

Figure 70, Select drive

 Select the available drive, using the Enter key.

Figure 71, Map Host Lun message dialog

 Use the Enter key.


Figure 72, Map Host Lun message dialog

 Use the Enter key and select “Yes”.

 Go back to the <Main Menu>

Figure 73, Controller Selection

 Select the Secondary Controller

Figure 74, Lun Selection

 Select the first empty space and press Enter.

Figure 75, Logical drive selection dialog

 Select the available drive and press Enter.


Figure 76, Map Host Lun message dialog

 Use the Enter key.

Figure 77, Map Host Lun message dialog

 Use the Enter key and select Yes.

 You can verify the status in the main menu. (Press ESC until you reach it)

Figure 78, Main Menu

 After having finished the previous steps, run the following commands as user
root:
# update_drv -f sd
(…)
# devfsadm

 Remove the SPOTS Performance Management V14.0 DVD.

 Insert the SPOTS Patches DVD

 Install patch p140101-* (where * is the latest release version in the patch DVD,
if it wasn’t already installed).

# /var/3320/patch/3320-ee.part.v490.ksh

 All the new drives are now available on the operating system.

 Still as root, issue the following command:


# /cdrom/cdrom0/storedge/stor.chg.cron.sh

 Using the vi command, edit /etc/spots.ss3320.conf.email and replace the addresses of
the users that will receive notifications in case of hard disk failure.


 To verify that the StorEdge 3320 is properly configured, issue the following
command for the different configurations:
# sccli
(...)
sccli> show ld

 You should get an output similar to the following:

LD    LD-ID      Size    Assigned    Type    Disks   Spare   Failed   Status
-----------------------------------------------------------------------------
ld0   3ADD9DE1   408GB   Primary     RAID1   6       0       0        Good
ld1   1F7D914F   408GB   Secondary   RAID1   6       0       0        Good

 Please check the values of the following columns: Size, Assigned, Type, Disks, Spare,
Failed and Status.

 To quit the StorEdge Command Line Interface issue the following command:
sccli> exit

 The StorEdge 3320 is now fully configured.

Proceed to the Oracle Software Installation in Chapter 8 if you are not going to upgrade your hardware configuration.

Spots StorageTek Medium C Configuration

 This annex should only be used for the Medium C Configuration. Server is a Sun
SPARC Enterprise M3000.

In order to access the CAM software, use a browser and load the following URL:

https://cam-management-host:6789

Take the following into consideration:

o Replace cam-management-host in the URL above with the IP address of the management host.
o Access to port 6789 must be allowed; firewall rules might need changes.
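A quick, optional way to confirm that the port is reachable from the administration workstation is a plain telnet test (illustrative only; use the real management host address):

# telnet cam-management-host 6789

If the connection is refused or times out, review the firewall rules between the workstation and the management host before retrying the URL.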

Figure 79, CAM authentication web page

In the above image you can see the authentication web page that allows you to access the
CAM. Type the user name and password of the account used to install the CAM software.

Now, using the navigation pane of the CAM, go to “Storage Systems” → <your_storage_system_name> → “Virtual Disks”. Select the option “New”.


Figure 80, CAM Virtual Disks

The next step allows creating Virtual Disks from the available disks in the array. Two virtual disks will be created, both configured with RAID 1 (also known as mirroring). This design makes them fault tolerant.

Choose a name for your Virtual Disk and type it in the respective form field. Change the Configuration to “Custom” and proceed by selecting “Next”.

Figure 81, CAM create Virtual Disks configuration

In the drop down boxes, choose “RAID 1” and “512 KB”, respectively, for Raid Level and Segment Size. Then choose six disks to be used for the first Virtual Disk from the list of available disks. You can choose, for example, the first six. After checking them, you can select “Calculate VDisk Capacity” to check the capacity of the Virtual Disk to be created. To proceed, select “Next”.


Figure 82, CAM create Virtual Disks configuration

The next step is to select the pairs of disks that define a mirror pair. Do this by selecting a disk from each of the leftmost boxes (Available Drives) and selecting “Add Drive Pair”. The mirror pairs will be displayed in the box “Mirror Drive Pairs”.

Figure 83, CAM create Virtual Disks, specify mirror pairs

Figure 84, CAM create Virtual Disks, specify mirror pairs

Figure 85, CAM create Virtual Disks, specify mirror pairs

All mirror pairs are now defined. Select “Next”.


Figure 86, CAM create Virtual Disks, specify mirror pairs

Choose the option “Create Volume”, type a name for the volume, maintain the “1” in the
“Number of Volumes to create:”, for “Volume Size” choose option “Fill One Virtual Disk…” and
assign “A” to “Controller”. Select next to proceed.

Figure 87, Create CAM Virtual Disks, configure volume

Choose option “Map to an Existing Host/Group or the Default Storage Domain”. Select next to
proceed.

Figure 88, CAM Create Virtual Disks, specify volume mapping

Choose the option that corresponds to your hostname and assign an available LUN. Select next
to proceed.


Figure 89, CAM Create Virtual Disks, select Host or Host Group

Review your configuration and select “Finish”.

Figure 90, CAM Create Virtual Disks, review configuration

The Virtual Disk created is displayed. It will take some time until it is initialized (in the column
“State” there is a message stating “Initializing…”).

Figure 91, CAM Create Virtual Disks summary on Storage Systems

NOTE:
There is a bug in the Sun StorageTek CAM software that sometimes reports that the Virtual Disk was created successfully when, in fact, no Virtual Disk was created. If you encounter this bug, repeat the procedure to create the Virtual Disk.

While the first Virtual Disk is being initialized, create the second Virtual Disk, using the available disks.
Select “New” in the Virtual Disk Summary web page. The following web page will be presented. Choose a name different from the previous one for the Virtual Disk to be created.


Figure 92, CAM Create Virtual Disks configuration

In the drop down boxes, choose “RAID 1” and “512 KB”, respectively, for Raid Level and
Segment Size. Check all available disks presented. After checking them, you can select
“Calculate VDisk Capacity” to check the capacity of the Virtual Disk to be created. To proceed,
select “Next”.

Figure 93, CAM Create Virtual Disks configuration

The next step is to select the pairs of disks that define a mirror pair. Do this by selecting a disk from each of the leftmost boxes (Available Drives) and selecting “Add Drive Pair”. The mirror pairs will be displayed in the box “Mirror Drive Pairs”.

Figure 94, CAM create Virtual Disks, specify mirror pairs

Figure 95, CAM Create Virtual Disks, specify mirror pairs


Figure 96, CAM Create Virtual Disks, specify mirror pairs

Figure 97, CAM Create Virtual Disks, specify mirror pairs

Choose the option “Create Volume”, type a name for the volume, maintain the “1” in the “Number of Volumes to create:”, for “Volume Size” choose option “Fill One Virtual Disk…” and assign “B” to “Controller”. Select next to proceed.

Figure 98, Create CAM Virtual Disks, configure volume

Choose option “Map to an Existing Host/Group or the Default Storage Domain”. Select next to
proceed.

Figure 99, CAM create Virtual Disks, specify volume mapping


Choose the option that corresponds to your hostname and assign an available LUN. Select next
to proceed.

Figure 100, CAM create Virtual Disks, select Host or Host Group

Review your configuration and select “Finish”.

Figure 101, CAM create Virtual Disks, review configuration

The Virtual Disk created is displayed. It will take some time until it is initialized (in the column
“State” there is a message stating “Initializing…”).

Figure 102, CAM create Virtual Disks summary on Storage Systems

Now you have to wait until the Virtual Disks have finished initializing. You can check the state on the web page presented below.

NOTE:
There is a bug in the Sun StorageTek CAM software that sometimes reports that the Virtual Disk was created successfully when, in fact, no Virtual Disk was created. If you encounter this bug, repeat the procedure to create the Virtual Disk.


Figure 103, CAM create Virtual Disks summary on Storage Systems

Additional information about the Volumes, Mappings and Current Jobs can be found on the CAM web pages presented below.

Figure 104, CAM Volume Summary on Storage Systems

Figure 105, CAM Mapping Summary on Storage Systems


Figure 106, CAM Current Job Summary on Storage Systems

 To verify that the StorageTek ST2540 virtual disks are properly configured, you
can go to the CAM software and navigate through “Storage Systems”, select
the array you want, and then finally select “Volumes”. For reference consult
Figure 104, CAM Volume Summary on Storage Systems.

 Reboot the machine.


# reboot

 After having finished the previous steps, run the following commands as user
root:

 Remove the SPOTS Performance Management V14.0 DVD.

 Install the patch p140101-* (where * is the latest release version of the patch).
This action is required only if the patch wasn’t already installed.

 Finally, run the script st2540.part.efi.sh.


# /var/2540/patch/st2540.part.efi.sh

 Ignore messages like the one presented below.

Corrupt label; wrong magic number
scsi: WARNING: /pci@0,600000/pci@0/pci@8/SUNW,emlxs@0,1/fp@0,0/ssd@w202400a0b85ab858,1f (ssd0):

 All the new drives are now available on the operating system.

 The StorageTek ST2540 is now fully configured.


Proceed to the Oracle Software Installation in Chapter 8, Installing Oracle Software, if you are not going to upgrade your hardware configuration.

Spots StorageTek Medium D Configuration

 This annex should only be used for the Medium D and Large D Configuration.
Server is a Sun SPARC Enterprise M3000.

This configuration is very similar to the Spots StorageTek Medium C Configuration, described in the chapter Spots StorageTek Medium C Configuration; it is very important to read that chapter before carrying out the Spots StorageTek Medium D Configuration.

In the Spots StorageTek Medium C Configuration the external storage consisted only of the Sun StorageTek ST2540 Array. The Spots StorageTek Medium D Configuration is, basically, the Medium C Configuration plus a Sun StorageTek 2501 Array Expansion Kit, which adds 12 external disks of 1TB each. These disks are intended for backup purposes only.

The differences lie in the steps where the disks are grouped to form each of the two volumes and in the configuration of the additional external disks available for backup.

PHASE 1 – Creation of /spots_db[1..6] filesystems


Follow the procedures as presented in chapter Spots StorageTek Medium C Configuration,
but in the step of selecting the disks to form the first volume (Figure 82, CAM create Virtual
Disks configuration), select 6 disks from the ones that have 300GB of disk space.

The next steps are similar to Medium C Configuration. Follow the procedures shown in Figure
82, CAM create Virtual Disks configuration and following figures. Don’t forget that in this
configuration you have to select 3 mirror pairs. Having finished the creation of the first volume,
do the same to create the second volume.

 Reboot the machine.


# reboot

 After having finished the previous steps, run the following commands as user
root:

 Remove the SPOTS Performance Management V14.0 DVD.

 Install the patch p140101-* (where * is the latest release version of the patch).
This action is required only if the patch wasn’t already installed.

 Finally, run the script st2540.part.efi.sh.


# /var/2540/patch/st2540.part.efi.sh

 Ignore messages like the one presented below.

Corrupt label; wrong magic number
scsi: WARNING: /pci@0,600000/pci@0/pci@8/SUNW,emlxs@0,1/fp@0,0/ssd@w202400a0b85ab858,1f (ssd0):

 All the new drives are now available on the operating system.


PHASE 2 – Creation of /backup filesystem

 At the time of writing, the Sun StorageTek 2501 was not available for development purposes. The procedure described here is similar to the one presented for the Spots StorageTek Medium C Configuration, which can therefore be used as a reference.

Follow the procedures as presented in Spots StorageTek Medium C Configuration, but in the step of selecting the disks to form the first volume (Figure 82, CAM create Virtual Disks configuration), select all 12 remaining disks. If you have performed the configuration correctly, only the 12 disks of 1TB are still available.

The next steps are similar to Medium C Configuration. Follow the procedures shown in Figure
82, CAM create Virtual Disks configuration and following figures. Don’t forget that in this
configuration you have to select 6 mirror pairs.

Another difference from the Medium C Configuration is that you will create only one volume. After having done so, wait until the volume has finished initializing. Consult Figure 106, CAM Current Job Summary on Storage Systems to see how to monitor this process.

 The initialization of the volumes can take a very long time! The estimated time to initialize
the volumes is at least 8 hours.

 Reboot the machine, using the command presented below.


# reboot -- -r

 After having finished the previous steps, run the following commands as user
root:

 Remove the SPOTS Performance Management V14.0 DVD.

 Install the patch p140101-* (where * is the latest release version of the patch).
This action is required only if the patch wasn’t already installed.

 Finally, run the script st2501.part.efi.sh.


# /var/2501/st2501.part.efi.sh

 The StorageTek ST2540 and StorageTek ST2501 are now fully configured.
Proceed to the Oracle Software Installation in Chapter 8 if you are not going to upgrade your hardware configuration.

Annex 8 – External Storage Setup for Large
Configuration


Spots StorEdge Large A Configuration

 This annex should only be used for the Large Configuration A. Server is a Sun Fire
V445.

 Login as root user.

 Since you will need a second terminal to complete this process, you will need
to edit the file /etc/default/login and comment out the following line:
# If CONSOLE is set, root can only login on that device.
# Comment this line out to allow remote login by root.
#
CONSOLE=/dev/console

 Remember to uncomment this line again after the installation is done, since allowing
remote root logins is a security hazard.

 Execute the following StorEdge Configuration CLI commands:


# sccli
sccli> show inquiry

If the Revision (Firmware version) of your SE3320 is 3.25S execute:


sccli> set drive-parameters auto-detect-swap-interval 60000
sccli> set drive-parameters polling-interval 30000
sccli> exit
#

If the Revision (Firmware version) of your SE3320 is 4.13B or later, execute:


sccli> set drive-parameters auto-detect-swap-interval 60s
sccli> set drive-parameters polling-interval 30s
sccli> exit
#

 Execute the following commands:


# TERM=vt100
# export TERM
# tip -38400 /dev/ttyb

 The following window appears:

 You may have to refresh the screen by pressing CTRL+L.
 The selection is done with the “Enter” key; confirmation is sometimes done with the “ESC” key.

Figure 107, Interface for StorEdge 3320 Configuration

 Select “Terminal (VT100 Mode)”, and the following window appears:

Figure 108, Main Menu window


Removing all Host Luns

 Select “view and edit Host luns”, and select “CHL 1 ID 0 (Primary controller)”:

Figure 109, Main Menu Channel selection

 Select the “LUN 0” by pressing the “Enter” key:

Figure 110, Main Menu Unmap LUN

 Choose “Yes”

 Remove the remaining Host LUNs if they exist. Hit the “Esc” key to exit the LUN table for Channel 1 (CHL 1).

 Using the arrow keys execute the same process for Channel 3 LUN (CHL 3),
see Figure 109, Main Menu Channel selection.

 After removing all the Luns from the StorEdge, hit the “ESC” key several times
until you are in the Main Menu window

Figure 111, Main Menu window

Removing all logical drives

 Using the arrow keys select “view and edit Logical drives”, and the following
window appears:

Figure 112, Logical Drives table

 Select the first logical drive “P0” by pressing the “Enter” key. The following
window will appear:


Figure 113, Actions for Logical Drives

 Move the cursor and select “Delete logical drive” by pressing the “Enter” key.

 Select “Yes” in the confirmation window.

 Now proceed by deleting the remaining drives, executing the same steps as for the first logical drive (see Figure 112, Logical Drives table; the configuration shown in the table is just an example of a configuration that was done on the StorEdge device and differs from the original one).

 The table of logical drives now appears empty.

Creating Logical Drives

 Select the first empty slot (using the “Enter” key):

Figure 114, Create Logical Drive confirmation

 Select “Yes”.

Figure 115, Raid level selection

 You will now be prompted to select the RAID type that is going to be used in
that logical drive. Select RAID 1.


Figure 116, Disk Selection

 Now, using the ENTER key, select the disks that are going to be used in the logical drive: select the first 12 disks of channel 0. After selecting the disks, hit the ESC key twice to confirm.

Figure 117, Logical Drive Creation confirmation

 Make sure that the stripe size is set to 128KB. Confirm the creation of the logical drive by selecting the “Yes” option. Some notice messages related to the logical drive may appear; hit the ESC key to dismiss all of them until you return to the logical drive configuration menu.

 Go back and create the second logical drive using a similar procedure (RAID
1).

Figure 118, Second logical drive creation

Figure 119, Second logical drive disk selection

 Using the ENTER key, select the disks that are going to be used in the logical
drive. Select the remaining 12 disks from channel 2.


Figure 120, Secondary controller assignment

 Hit ESC and select “Logical Drive Assignments”, to assign this logical drive to
the secondary controller. Select “Yes”. Set stripe size to 128KB.

Figure 121, Logical drive creation

 Hit ESC once more and then select “Yes” to confirm the creation of the logical drive. You will need to wait until the logical drives are available before creating the new host LUNs. After the creation of the logical drives, two popup windows will appear stating that each logical drive was created; hit the ESC key in both cases. Hit the ESC key until you are in the main menu.

Creating Host LUN maps

 Select view and edit host luns, and select channel 1.

Figure 122, Channel 1 Selection

 Select “Logical Drive”

Figure 123, Selecting the first empty slot.

 Hit Enter to select the first available slot.


Figure 124, Logical Drive selection

 Select the first logical drive and hit ENTER key twice. Hit Enter to map the Host
Lun.

Figure 125, Map Host Lun confirmation

 Now go back, select channel 3 and assign the second lun to the remaining
logical drive in channel 3 repeating the same procedure.

Figure 126, Second Host Lun confirmation

Figure 127, Main Menu

 After having finished the previous steps, run the following commands as user
root:
# update_drv -f sd
(…)
# devfsadm


 Remove the SPOTS Performance Management V14.0 DVD.


 Insert the SPOTS Patches DVD

 Install patch p140001-* (where * is the latest release version in the patch
DVD, if it wasn’t already installed).

# /var/3320/patch/3320-ee.part.ksh

 All the new drives are now available on the operating system.

 Remove the SPOTS Patches DVD.


 Insert the SPOTS Performance Management V14.0 DVD.

 Still as root, issue the following command:


# /cdrom/cdrom0/storedge/stor.chg.cron.sh

 Using the vi command, edit /etc/spots.ss3320.conf.email and replace the addresses of the users that will receive notifications in case of hard disk failure.

 To verify that the StorEdge 3320 is properly configured, issue the following command for the different configurations:
# sccli
(...)
sccli> show ld

 You should get an output similar to the following:

LD    LD-ID     Size    Assigned    Type    Disks  Spare  Failed  Status
--------------------------------------------------------------------------
ld0   5E374C2C  876GB   Primary     RAID1   12     0      0       Good
ld1   68EC876B  876GB   Secondary   RAID1   12     0      0       Good

 Please check the values of the following columns: Size, Assigned, Type, Disks, Spare,
Failed and Status.

 To quit the StorEdge Command Line Interface issue the following command:
sccli> exit

 The StorEdge 3320 is now fully configured.


Proceed to the Oracle Software Installation in Chapter 8 if you are not going to upgrade your hardware configuration.

Spots StorEdge Large B Configuration

 This annex should only be used for the Large Configuration B. Server is a Sun Fire
V490.

 Login as root user.

 Since you will need a second terminal to complete this process, you will need
to edit the file /etc/default/login and comment out the following line:
# If CONSOLE is set, root can only login on that device.
# Comment this line out to allow remote login by root.
#
CONSOLE=/dev/console

 Remember to uncomment this line again after the installation is done, since allowing
remote root logins is a security hazard.

 Execute the following StorEdge Configuration CLI commands:


# sccli
sccli> show inquiry

If the Revision (Firmware version) of your SE3320 is 3.25S execute:


sccli> set drive-parameters auto-detect-swap-interval 60000
sccli> set drive-parameters polling-interval 30000
sccli> exit
#

If the Revision (Firmware version) of your SE3320 is 4.13B or later, execute:


sccli> set drive-parameters auto-detect-swap-interval 60s
sccli> set drive-parameters polling-interval 30s
sccli> exit
#

 Since the V490 does not come with a standard RS-232 serial port, a Windows PC with a serial port is needed for the following steps. Refer to Annex 12, Configuring the RS-232 Serial Port Connection, and after the connection has been established return to this chapter.

 A similar window will appear (the screenshots below were taken using Solaris; the contents shown inside the HyperTerminal window are the same):
 You may have to refresh the screen by pressing CTRL+L.


 The selection is done with the “Enter” key; confirmation is sometimes done with the “ESC” key.

Figure 128, Interface for StorEdge 3320 Configuration

 Select “Terminal (VT100 Mode)”, and the following window appears:

Figure 129, Main Menu window


Removing all Host Luns

 Select “view and edit Host luns”, and select “CHL 1 ID 0 (Primary controller)”:

Figure 130, Main Menu Channel selection

 Select the “LUN 0” by pressing the “Enter” key:

Figure 131, Main Menu Unmap LUN

 Choose “Yes”

 Remove the remaining Host LUNs if they exist. Hit the “Esc” key to exit the LUN table for Channel 1 (CHL 1).

 Using the arrow keys execute the same process for Channel 3 LUN (CHL 3),
see Figure 130, Main Menu Channel selection.

 After removing all the Luns from the StorEdge, hit the “ESC” key several times
until you are in the Main Menu window


Figure 132, Main Menu window

Removing all logical drives

 Using the arrow keys select “view and edit Logical drives”, and the following
window appears:

Figure 133, Logical Drives table

 Select the first logical drive “P0” by pressing the “Enter” key. The following
window will appear:

Figure 134, Actions for Logical Drives

 Move the cursor and select “Delete logical drive” by pressing the “Enter” key.

 Select “Yes” in the confirmation window.

 Now proceed by deleting the remaining drives, executing the same steps as for the first logical drive (see Figure 133, Logical Drives table; the configuration shown in the table is just an example of a configuration that was done on the StorEdge device and differs from the original one).

 The table of logical drives now appears empty.

Creating Logical Drives

 Select the first empty slot (using the “Enter” key):

Figure 135, Create Logical Drive confirmation

 Select “Yes”.

Figure 136, Raid level selection

 You will now be prompted to select the RAID type that is going to be used in
that logical drive. Select RAID 1.

Figure 137, Disk Selection

 Now, using the ENTER key, select the disks that are going to be used in the logical drive: select the first 12 disks of channel 0. After selecting the disks, hit the ESC key twice to confirm.

Figure 138, Logical Drive Creation confirmation

 Make sure that the stripe size is set to 128KB. Confirm the creation of the logical drive by selecting the “Yes” option. Some notice messages related to the logical drive may appear; hit the ESC key to dismiss all of them until you return to the logical drive configuration menu.

 Go back and create the second logical drive using a similar procedure (RAID
1).

Figure 139, Second logical drive creation

Figure 140, Second logical drive disk selection

 Using the ENTER key, select the disks that are going to be used in the logical
drive. Select the remaining 12 disks from channel 2.

Figure 141, Secondary controller assignment

 Hit ESC and select “Logical Drive Assignments”, to assign this logical drive to
the secondary controller. Select “Yes”. Set stripe size to 128KB.

Figure 142, Logical drive creation

 Hit ESC once more and then select “Yes” to confirm the creation of the logical drive. You will need to wait until the logical drives are available before creating the new host LUNs. After the creation of the logical drives, two popup windows will appear stating that each logical drive was created; hit the ESC key in both cases. Hit the ESC key until you are in the main menu.

Creating Host LUN maps

 Select view and edit host luns, and select channel 1.

Figure 143, Channel 1 Selection

 Select “Logical Drive”

Figure 144, Selecting the first empty slot.

 Hit Enter to select the first available slot.

Figure 145, Logical Drive selection

 Select the first logical drive and hit ENTER key twice. Hit Enter to map the Host
Lun.

Figure 146, Map Host Lun confirmation

 Now go back, select channel 3 and assign the second lun to the remaining
logical drive in channel 3 repeating the same procedure.


Figure 147, Second Host Lun confirmation

Figure 148, Main Menu

 After having finished the previous steps, run the following commands as user
root:
# update_drv -f sd
(…)

# devfsadm

 Remove the SPOTS Performance Management V14.0 DVD.

 Insert the SPOTS Patches DVD.

 Install patch p140101-* (where * is the latest release version in the patch DVD,
if it wasn’t already installed).

# /var/3320/patch/3320-ee.part.v490.ksh

 All the new drives are now available on the operating system.

 Remove the SPOTS Patches DVD.

 Insert the SPOTS Performance Management V14.0 DVD.

 Still as root, issue the following command:


# /cdrom/cdrom0/storedge/stor.chg.cron.sh

 Using the vi command, edit /etc/spots.ss3320.conf.email and replace the addresses of the users that will receive notifications in case of hard disk failure.

 To verify that the StorEdge 3320 is properly configured, issue the following command for the different configurations:
# sccli
(...)
sccli> show ld

 You should get an output similar to the following:

LD    LD-ID     Size    Assigned    Type    Disks  Spare  Failed  Status
--------------------------------------------------------------------------
ld0   5E374C2C  876GB   Primary     RAID1   12     0      0       Good
ld1   68EC876B  876GB   Secondary   RAID1   12     0      0       Good

 Please check the values of the following columns: Size, Assigned, Type, Disks, Spare,
Failed and Status.

 To quit the StorEdge Command Line Interface issue the following command:
sccli> exit

 The StorEdge 3320 is now fully configured.


Proceed to the Oracle Software Installation in Chapter 8 if you are not going to upgrade your hardware configuration.


Spots StorageTek Large C Configuration

 This annex should only be used for the Large C Configuration. Server is a Sun
SPARC Enterprise M3000.

This configuration is very similar to the Spots StorageTek Medium C Configuration, described in Spots StorageTek Medium C Configuration; it is very important to read that chapter before carrying out the Spots StorageTek Large C Configuration.

In the Spots StorageTek Medium C Configuration the external storage consisted only of the Sun StorageTek ST2540 Array. The Spots StorageTek Large C Configuration is, basically, the Medium C Configuration plus a JBOD, which increases the number of external disks from 12 to 24 and doubles the available disk space capacity.

The differences lie in the steps where the disks are grouped to form each of the two volumes.

Follow the procedures as presented in Spots StorageTek Medium C Configuration, but in the step of selecting the disks to form the first volume (Figure 81, CAM create Virtual Disks configuration), select 12 disks instead of 6. This implies 6 mirror pairs, so the following step also differs from the one presented in Annex 6. Taking this into consideration, follow the procedures shown in Figure 82, CAM create Virtual Disks configuration and the following figures. Don’t forget that in this configuration you have to select 6 mirror pairs. Having finished the creation of the first volume, do the same to create the second volume.

 Reboot the machine.


# reboot

 After having finished the previous steps, run the following commands as user
root:

 Remove the SPOTS Performance Management V14.0 DVD.

 Install the patch p140101-* (where * is the latest release version of the patch).
This action is required only if the patch wasn’t already installed.

 Finally, run the script st2540.part.efi.sh.


# /var/2540/patch/st2540.part.efi.sh

 Ignore messages like the one presented below.

Corrupt label; wrong magic number
scsi: WARNING: /pci@0,600000/pci@0/pci@8/SUNW,emlxs@0,1/fp@0,0/ssd@w202400a0b85ab858,1f (ssd0):

 All the new drives are now available on the operating system.

 The StorageTek ST2540 is now fully configured.
Proceed to the Oracle Software Installation in Chapter 8 if you are not going to upgrade your hardware configuration.


Spots StorageTek Large D Configuration

 This annex should only be used for the Medium D and Large D Configuration.
Server is a Sun SPARC Enterprise M3000.

The Large D configuration is identical to the Medium D configuration in terms of external storage, so the whole configuration procedure is the same. Consult Spots StorageTek Medium D Configuration.

Annex 9 – StorEdge 3320 setup for Medium
Legacy Configuration


 This annex should only be used for the Medium Legacy Configuration.

 Login as root user.

 Since you will need a second terminal to complete this process, you will need
to edit the file /etc/default/login and comment out the following line:
# If CONSOLE is set, root can only login on that device.
# Comment this line out to allow remote login by root.
#
CONSOLE=/dev/console

 Remember to uncomment this line again after the installation is done, since allowing
remote root logins is a security hazard.

 Execute the following StorEdge Configuration CLI commands:


# sccli
sccli> show inquiry

If the Revision (Firmware version) of your SE3320 is 3.25S execute:


sccli> set drive-parameters auto-detect-swap-interval 60000
sccli> set drive-parameters polling-interval 30000
sccli> exit
#

If the Revision (Firmware version) of your SE3320 is 4.13B or later, execute:


sccli> set drive-parameters auto-detect-swap-interval 60s
sccli> set drive-parameters polling-interval 30s
sccli> exit
#

 Execute the following commands:


# TERM=vt100
# export TERM
# tip -38400 /dev/ttyb

 The following window appears:


 You may have to refresh the screen by pressing CTRL+L.
 The selection is done with the “Enter” key; confirmation is sometimes done with the “ESC” key.

Figure 149, Interface for StorEdge 3320 Configuration

 Select “Terminal (VT100 Mode)”, and the following window appears:

Figure 150, Main Menu window


Removing all Host Luns

 Select “view and edit Host luns”, and select “CHL 1 ID 0 (Primary controller)”:

Figure 151, Main Menu Channel selection

 Select the “LUN 0” by pressing the “Enter” key:

Figure 152, Main Menu Unmap LUN

 Choose “Yes”

 Remove the remaining Host LUNs if they exist. Hit the “Esc” key to exit the LUN table for Channel 1 (CHL 1).

 Using the arrow keys execute the same process for Channel 3 LUN (CHL 3)

 After removing all the Luns from the StorEdge, hit the “ESC” key several times
until you are in the Main Menu window

Figure 153, Main Menu window

Removing all logical drives

 Using the arrow keys select “view and edit Logical drives”, and the following
window appears:

Figure 154, Logical Drives table

 Select the first logical drive “P0” by pressing the “Enter” key. The following
window will appear:


Figure 155, Actions for Logical Drives

 Move the cursor and select “Delete logical drive” by pressing the “Enter” key.

 Select “Yes” in the confirmation window.

 Now proceed by deleting the remaining drives, executing the same steps as for the first logical drive.

 The table of logical drives now appears empty.

Creating Logical Drives

 Select the first empty slot (using the “Enter” key):

Figure 156, Create Logical Drive confirmation

 Select “Yes”.

Figure 157, Raid level selection

 You will now be prompted to select the RAID type that is going to be used in
that logical drive. Select RAID 1.


Figure 158, Disk Selection

 Now, using the ENTER key, select the disks that are going to be used in the logical drive: select disks 0 and 5 in channel 0 and disks 0 and 5 in channel 2. After selecting the disks, hit the ESC key twice to confirm.

Figure 159, Logical Drive Creation confirmation

 Confirm the creation of the logical drive by selecting the “Yes” option. Some notice messages related to the logical drive may appear; hit the ESC key to dismiss all of them until you return to the logical drive configuration menu.

 Create the second logical drive using RAID5.

Figure 160, Second logical drive creation

Figure 161, Second logical drive disk selection


 Using the ENTER key, select the disks that are going to be used in the logical
drive. Select disks 1, 2, 3 and 4 in channel 0. After selecting the disks hit ESC
key twice and select “Yes” to confirm the creation of the logical drive.

 Repeat once more the logical drive creation procedure with the following
exceptions:
• Select RAID5
• Select disks 1, 2, 3 and 4 in channel 2 (remaining disks).

Figure 162, Redundant controller assignment

 Before creating the logical drive, assign it to the secondary controller by selecting “Logical Drive Assignments”. Select “Yes” and hit the ESC key to confirm. Hit the ESC key to dismiss the informative popup windows that appear.

You will need to wait until the three logical drives are available before creating the new host LUNs. After the creation of the logical drives, popup windows will appear stating that each logical drive was created; hit the ESC key in each case. Hit the ESC key until you are in the main menu.

Creating Host LUN maps

Figure 163, Channel 1 Selection

 Select view and edit host luns, and select channel 1.

Figure 164, Logical Drive Selection

 Select “Logical Drive”


Figure 165, Selecting the first empty slot

 Hit Enter to select the first available slot.

Figure 166, Logical Drive selection

 Select the first logical drive and hit ENTER key twice.

Figure 167, Map Host Lun confirmation

 Hit Enter to map the Host Lun.

Figure 168, Host Lun creation confirmation

 Now select the second slot and repeat the same procedure to map the other
logical drive (RAID 5) in channel 1.


Figure 169, Second host lun configuration

 After the procedure is complete for the second logical drive (RAID5) in channel 1, go back, select channel 3 and assign the LUN to the only logical drive in channel 3 (also RAID5).

Figure 170, Third host lun configuration

You can verify the status in the main menu. (Press ESC until you reach it)

Figure 171, Main Menu

 After having finished the previous steps, run the following commands as user
root:
# update_drv -f sd
(…)
# devfsadm
# /cdrom/cdrom0/storedge/3320.part.legacy.ksh

 All the new drives are now available on the operating system.

 Still as root, issue the following command:


# /cdrom/cdrom0/storedge/stor.chg.cron.sh

 Using the vi command, edit /etc/spots.ss3320.conf.email and replace the addresses of the users that will receive notifications in case of hard disk failure.

 To verify that the StorEdge 3320 is properly configured, issue the following command for the different configurations:
# sccli
(...)
sccli> show ld

 You should get an output similar to the following:

LD    LD-ID     Size      Assigned    Type    Disks  Spare  Failed  Status
----------------------------------------------------------------------------
ld0   3ADD9DE1  204.35GB  Primary     RAID5   4      0      0       Good
ld1   1F7D914F  204.35GB  Secondary   RAID5   4      0      0       Good
ld2   5BA1E8E9  136.23GB  Primary     RAID1   4      0      0       Good
 Please check the values of the following columns: Size, Assigned, Type, Disks, Spare,
Failed and Status.

 To quit the StorEdge Command Line Interface issue the following command:
sccli> exit

 The StorEdge 3320 is now fully configured.


Proceed to the Oracle Software Installation in Chapter 8 if you are not going to upgrade your hardware configuration.

Annex 10 – StorEdge 3320 setup for Large
Legacy Configuration


 This annex should only be used for the Large Legacy Configuration.

 Login as root user.

 Since you will need a second terminal to complete this process, you will need
to edit the file /etc/default/login and comment out the following line:
# If CONSOLE is set, root can only login on that device.
# Comment this line out to allow remote login by root.
#
CONSOLE=/dev/console

 Remember to uncomment this line again after the installation is done, since allowing
remote root logins is a security hazard.

 Execute the following StorEdge Configuration CLI commands:


# sccli
sccli> show inquiry

If the Revision (Firmware version) of your SE3320 is 3.25S execute:


sccli> set drive-parameters auto-detect-swap-interval 60000
sccli> set drive-parameters polling-interval 30000
sccli> exit
#

If the Revision (Firmware version) of your SE3320 is 4.13B or later, execute:


sccli> set drive-parameters auto-detect-swap-interval 60s
sccli> set drive-parameters polling-interval 30s
sccli> exit
#

 Execute the following commands:


# TERM=vt100
# export TERM
# tip -38400 /dev/ttyb

 The following window appears:


 You may have to refresh the screen by pressing CTRL+L.
 The selection is done with the “Enter” key; confirmation is sometimes done with the “ESC” key.

Figure 172, Interface for StorEdge 3320 Configuration

 Select “Terminal (VT100 Mode)”, and the following window appears:

Figure 173, Main Menu window


Removing all Host Luns

 Select “view and edit Host luns”, and select “CHL 1 ID 0 (Primary controller)”:

Figure 174, Main Menu Channel selection

 Select the “LUN 0” by pressing the “Enter” key:

Figure 175, Main Menu Unmap LUN

 Choose “Yes”

 Remove the remaining Host LUNs if they exist. Hit the “Esc” key to exit the LUN table for Channel 1 (CHL 1).

 Using the arrow keys execute the same process for Channel 3 LUN (CHL 3).

 After removing all the Luns from the StorEdge, hit the “ESC” key several times
until you are in the Main Menu window

Figure 176, Main Menu window

Removing all logical drives

 Using the arrow keys select “view and edit Logical drives”, and the following
window appears:

Figure 177, Logical Drives table

 Select the first logical drive “P0” by pressing the “Enter” key. The following
window will appear:


Figure 178, Actions for Logical Drives

 Move the cursor and select “Delete logical drive” by pressing the “Enter” key.

 Select “Yes” in the confirmation window.

 Now proceed by deleting the remaining drives, executing the same steps as for the first logical drive.

 The table of logical drives now appears empty.

Creating Logical Drives

 Select the first empty slot (using the “Enter” key):

Figure 179, Create Logical Drive confirmation

 Select “Yes”.

Figure 180, Raid level selection

 You will now be prompted to select the RAID type that is going to be used in
that logical drive. Select RAID 1.

Figure 181, Disk Selection


 Now, using the ENTER key, select the disks that are going to be used in the logical drive: select disks 0, 1, 2, 3, 4, 11, 12 and 13 in channel 0 and disks 12 and 13 in channel 2. After selecting the disks, hit the ESC key twice to confirm.

Figure 182, Logical Drive Creation confirmation

 Confirm the creation of the logical drive by selecting the “Yes” option. Some notice messages related to the logical drive may appear; hit the ESC key to dismiss all of them until you return to the logical drive configuration menu.

 Go back and create the second logical drive using a similar procedure (RAID
1).

Figure 183, Second logical drive creation

Figure 184, Second logical drive disk selection

 Using the ENTER key, select the disks that are going to be used in the logical
drive. Select disks 0, 1, 2, 3, 4 and 11 in channel 2.


Figure 185, Secondary controller assignment

 Hit ESC and select “Logical Drive Assignments”, to assign this logical drive to
the secondary controller. Select “Yes”

Figure 186, Logical drive creation

 Hit ESC once more and then select “Yes” to confirm the creation of the logical
drive.

 Repeat the logical drive creation procedure once more to create 2 more logical drives with the following parameters:
• RAID1 – Disks 5, 8, 9, 10 in channel 0 – Primary controller
• RAID1 – Disks 5, 8, 9, 10 in channel 2 – Secondary controller

Figure 187, Final state of the four Logical Drive creation processes
You will need to wait until the four logical drives are available before creating the new host LUNs. After the creation of the logical drives, popup windows will appear stating that each logical drive was created; hit the ESC key in each case. Hit the ESC key until you are in the main menu.


Creating Host LUN maps

Figure 188, Channel 1 Selection

 Select view and edit host luns, and select channel 1.

Figure 189, Logical Drive Selection

 Select “Logical Drive”

Figure 190, Selecting the first empty slot.

 Hit Enter to select the first available slot.

Figure 191, Logical Drive selection

 Select the first logical drive and hit ENTER key twice.


Figure 192, Map Host Lun confirmation

 Hit Enter to map the Host Lun.

Figure 193, Host Lun creation confirmation

 Now select the second slot and repeat the same procedure to map the other
logical drive in channel 1.

Figure 194, Second host lun configuration

 After the procedure is complete for the second logical drive in channel 1, go back, select channel 3 and assign the two LUNs to the remaining two logical drives in channel 3.

Figure 195, Third host lun configuration

You can verify the status in the main menu. (Press ESC until you reach it)


Figure 196, Main Menu

 After having finished the previous steps, run the following commands as user
root:
# update_drv -f sd
(…)
# devfsadm
# /cdrom/cdrom0/storedge/3320-ee.part.legacy.ksh

 All the new drives are now available on the operating system.

 Still as root, issue the following command:


# /cdrom/cdrom0/storedge/stor.chg.cron.sh

 Using the vi command, edit /etc/spots.ss3320.conf.email and replace the addresses of the users that will receive notifications in case of hard disk failure.

 To verify that the StorEdge 3320 is properly configured, issue the following command for the different configurations:
# sccli
(...)
sccli> show ld

 You should get an output similar to the following:

LD    LD-ID     Size      Assigned    Type    Disks  Spare  Failed  Status
----------------------------------------------------------------------------
ld0   5E374C2C  340.58GB  Primary     RAID1   10     0      0       Good
ld1   68EC876B  204.35GB  Secondary   RAID1   6      0      0       Good
ld2   78B9CCC0  136.23GB  Primary     RAID1   4      0      0       Good
ld3   24A78FFB  136.23GB  Secondary   RAID1   4      0      0       Good

 Please check the values of the following columns: Size, Assigned, Type, Disks, Spare,
Failed and Status.

 To quit the StorEdge Command Line Interface issue the following command:
sccli> exit

 The StorEdge 3320 is now fully configured.


Proceed to the Oracle Software Installation in Chapter 8 if you are not going to upgrade your hardware configuration.


Annex 11 – Setting up LDAP client in Solaris

When the SPOTS server is part of an LDAP environment, some configuration steps are required to set up the server as an LDAP client. Basically, it is necessary to edit some configuration files and to initialize access to the LDAP server.
However, it is not the purpose of the Installation manual or of any other SPOTS documentation to describe the several ways of setting up an LDAP server. Therefore, before installing the SPOTS application, the server must already be an active LDAP client with the users and groups required by SPOTS pre-configured.

This setup procedure is just an example; for more detailed information please refer to your LDAP documentation.

Enabling and initializing LDAP Client

 To enable the local LDAP Client in Solaris 10 run the following command as user root:
# svcadm enable svc:/network/ldap/client:default

 Now to initialize the LDAP Client run as user root:


# /usr/sbin/ldapclient -v init \
-a proxyDN=<cn>=<proxyagent>,ou=<profile>,dc=<example>,dc=<nsn>,dc=<pt> \
-a domainName=<example.nsn.pt> \
-a profileName=<default> \
-a proxyPassword=<password> \
<IP_Address_LDAP_Server>

All the fields within < > must be filled with the parameters/values which were used during the LDAP Server configuration. The command will automatically set up the local configuration files to reflect the initialization parameters.
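For illustration only, a filled-in example with hypothetical values (proxy agent account, profile “default”, domain example.nsn.pt and LDAP server 10.0.0.10) would look like the following; afterwards the resulting client configuration can be reviewed with ldapclient list:

# /usr/sbin/ldapclient -v init \
-a proxyDN=cn=proxyagent,ou=profile,dc=example,dc=nsn,dc=pt \
-a domainName=example.nsn.pt \
-a profileName=default \
-a proxyPassword=secret \
10.0.0.10
# /usr/sbin/ldapclient list

The ldapclient list command prints the active client parameters (NS_LDAP_SERVERS, NS_LDAP_BINDDN, and so on) and is a convenient way to confirm that the initialization succeeded.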

Configure /etc/nsswitch.conf

 During the last step nsswitch.conf was modified and now needs further changes. It is necessary to remove the [NOTFOUND=return] flag from each line in nsswitch.conf. This allows entries such as localhost, which exist only in the static files and are missing from the directory, to still be resolved. Example:
hosts: ldap [NOTFOUND=return] files

 The nsswitch.conf should look like:


# cat /etc/nsswitch.conf
#
# /etc/nsswitch.ldap:
#
# An example file that could be copied over to /etc/nsswitch.conf; it
# uses LDAP in conjunction with files.
#
# "hosts:" and "services:" in this file are used only if the
# /etc/netconfig file has a "-" for nametoaddr_libs of "inet" transports.

# LDAP service requires that svc:/network/ldap/client:default be enabled
# and online.

# the following two lines obviate the "+" entry in /etc/passwd and /etc/group.
passwd: files ldap
group: files ldap

# consult /etc "files" only if ldap is down.
hosts: files ldap dns

# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.
ipnodes: files ldap

networks: files ldap
protocols: files ldap
rpc: files ldap
ethers: files ldap
netmasks: files ldap
bootparams: files ldap
publickey: files ldap

netgroup: ldap

automount: files ldap
aliases: files ldap

# for efficient getservbyname() avoid ldap
services: files ldap

printers: user files ldap

auth_attr: files ldap
prof_attr: files ldap

project: files ldap

 Refresh the Name Service Cache Daemon after editing /etc/nsswitch.conf:


# /etc/init.d/nscd stop
# /etc/init.d/nscd start
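 As an optional verification (not part of the original procedure), confirm that users and groups defined only in the directory are now resolved through LDAP. The names spotsuser and spots below are hypothetical; replace them with the SPOTS user and group defined on your LDAP server:

# getent passwd spotsuser
# getent group spots

If the entries are returned, the LDAP client and the name service switch are working.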

Annex 12 – Configuring the RS-232 Serial
Port Connection


The RS-232 COM (serial) port on either controller module can be used to configure and monitor
the RAID array using the controller firmware. It can be connected to a VT100 terminal, terminal
emulation program, terminal server, or the serial port of a server.
Note - When you connect through a serial port connection, you might need to refresh the
screen to display the RAID firmware Main Menu properly. Press Ctrl-L to refresh the screen.

1. Use a null modem serial cable to connect the COM port of the RAID array to an unused serial port on your host system.

Note - A DB9-to-DB25 serial cable adapter is included in your package contents to connect the
serial cable to a DB25 serial port on your host if you do not have a DB9 serial port.

2. Power up the array.


3. On the Windows 200x server (a Windows 2000 server was used in this example, but Windows XP and Windows 2003 can also be used), select Start → Programs → Accessories → Communications → HyperTerminal.
4. Type a name and choose an icon for the connection.
5. In the Connect To window, choose the COM port from the Connect Using: drop-down menu
that is connected to the array.

Figure 197, HyperTerminal configuration

6. Click OK.

7. In the Properties window, set the serial port parameters using the drop-down menus.

• 38400 baud

• 8 bit

• 1 stop bit

• Parity: None

• Flow control: None
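The same serial parameters can also be used from a Solaris host with the tip command shown elsewhere in this guide, assuming the null modem cable is connected to the host’s second serial port (/dev/ttyb):

# TERM=vt100
# export TERM
# tip -38400 /dev/ttyb

This is only an alternative to HyperTerminal; the RAID firmware menus behave the same way over either connection.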

Figure 198, Connection properties
8. To save the connection and its settings, select File → Save. The connection filename is connection_name, where connection_name is the name you gave this HyperTerminal connection when you created it.
9. To make a connection shortcut on your desktop, select Start → Find → For Files or Folders. Enter the connection_name and click the Search Now button. Highlight and right-click the filename in the Search Results window, select Create Shortcut, and click Yes.
10. Now return to the configuration of the array:
a. Spots StorEdge Medium B Configuration
b. Spots StorEdge Large B Configuration


Annex 13 – Sun SPARC Enterprise M3000
Server Spots Installation

Post installation tasks Spots PMS Distributed Configuration

While doing the post installation tasks make sure that these steps are done:

 As root, stop SPOTS Services on AS and DS:

(on the AS) # /etc/init.d/initSpots stop


(on the DS) # /etc/init.d/initSpots stop

 As the spots user, on the Application Server edit the file /opt/spots-pms/sas.cfg and add the following line:
NamingServerHost=db

 As the spots user, on the Database Server, edit the file /opt/spots-pms/domains.cfg and change the IP address to the database host name db:
domain Root DS@10.46.18.230;
change to:
domain Root DS@db;

 As the spots user, copy /opt/spots-pms/domains.cfg file from the AS to the DS.

 As the spots user, copy /opt/spots-pms/sdb.cfg file from the DS to the AS.
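 A hypothetical way to perform these copies, assuming ssh/scp is available between the servers and that the host alias db resolves to the Database Server, is shown below. Any other transfer method is equally valid, as long as the ownership and permissions of the files for the spots user are preserved.

(on the AS, as the spots user) $ scp /opt/spots-pms/domains.cfg spots@db:/opt/spots-pms/
(on the AS, as the spots user) $ scp spots@db:/opt/spots-pms/sdb.cfg /opt/spots-pms/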

 As the spots user, on the machine where the PMC was installed (in this case AS), edit file
/opt/spots-pmc/conf/spots_configuration.properties and do the changes mentioned in the IG
(reference 9.7.1 Configuring a Distributed SPOTS Environment with Real-Time).

 As the spots user, on the machine where the AS was installed, edit the file
MonitorServer.properties and do the changes mentioned in the IG (reference 9.7.1 Configuring
a Distributed SPOTS Environment with Real-Time).

 As root, start Spots Services, first on the DS and then on the AS:

DS
# /etc/init.d/initSpots start

AS
# /etc/init.d/initSpots start

At this point the user needs to proceed with the Spots System Configuration in chapter 9.5 -
System configuration issues.


Annex 14 – Sun SPARC Enterprise M3000/
M4000 XSCF

After having performed the installation and configurations described in the documentation of the
M3000/ M4000 hardware (Sun SPARC Enterprise M3000/ M4000 Server Getting Started
Guide), you can now access the XSCF.

When connecting to the XSCF through a KVM switch you might need to hit the enter key
several times before the XSCF appears:

Figure 199, Connection to the XSCF through a switch

 Type the user account login and password given to you by the Lab Manager. After logging in, change from the XSCF to the OK prompt by typing the following command:
console -d 0

 Insert Solaris 10 10/08 Software DVD and then boot the machine.
boot cdrom
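 For reference, the overall console flow looks like the following sketch, based on the steps above. The escape sequence “#.” for returning from the domain console to the XSCF shell is the documented default on the SPARC Enterprise M-series, but verify it against your XSCF documentation before relying on it:

XSCF> console -d 0
ok boot cdrom
(… the Solaris installation proceeds on the domain console …)
#.    (typed at the start of a line to detach from the console and return to the XSCF> prompt)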


Figure 200, Switch from XSCF to OK prompt and boot from CDROM

Return to Chapter 5, Installing SUN Solaris 10, Select English as the Solaris Installer language.

