
1) Soft Handover Overhead is high

Soft handover overhead was higher than 45% in the RNC; the value could not meet the KPI target, so the customer asked us to optimize the SHO overhead.

We checked cell coverage with iNastar to find overshooting cells and reduce the SHO overhead. Some cells' coverage was too large, so we asked the customer to increase the antenna downtilt of those cells.

Some parameter values differed from the Huawei recommended values; in particular TrigTime1A (1A time to trigger) was still using the NSN setting, two years after the NSN network swap.

After changing TrigTime1A to D320 on Oct. 9th:
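For reference, a minimal sketch of how the overhead figure quoted above can be computed, assuming the common definition SHO overhead = (total radio links in all active sets / number of UEs) - 1; the function and the numbers are illustrative, not an RNC formula script.

def sho_overhead(total_radio_links: float, total_ues: float) -> float:
    # Soft handover overhead as a percentage, assuming
    # overhead = average radio links per UE - 1.
    if total_ues == 0:
        return 0.0
    return (total_radio_links / total_ues - 1.0) * 100.0

# Example: an average of 1.46 radio links per UE gives 46% overhead,
# i.e. above the 45% figure mentioned in this case.
print(f"{sho_overhead(146_000, 100_000):.1f}%")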

2) PS CDR reduced due to inactivity timer opt.


PS DCR improved after 10/11 due to changing the PS inactivity timer (10 s -> 5 s).

SET UPSINACTTIMER

PsInactTmrForCon, PsInactTmrForStr, PsInactTmrForInt, PsInactTmrForBac. Meaning: when the PDCP layer detects that the PS user has had no data to transfer for longer than this timer, it requests the RRC layer to release the radio access bearer. The number of normal releases therefore increases, which decreases the PS CDR = Abnormal Releases / (Abnormal Releases + Normal Releases).
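As a rough illustration of why more normal releases lower the drop rate, here is a minimal sketch applying the PS CDR definition above; the numbers are illustrative, not taken from this case.

def ps_cdr(abnormal_rel: int, normal_rel: int) -> float:
    # CDR = abnormal / (abnormal + normal), in percent.
    total = abnormal_rel + normal_rel
    return 100.0 * abnormal_rel / total if total else 0.0

# A shorter inactivity timer ends more sessions as normal releases,
# so the same number of abnormal releases yields a lower CDR.
print(ps_cdr(50, 950))   # 5.0 %
print(ps_cdr(50, 1950))  # 2.5 %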

1) External Interference
We found that the KPIs for our site were not good and the RTWP for all cells was very high. We checked the RTWP for the new site GHB968:

We made an RF frequency scanning trace, which confirmed that there is some external interference.

After this we confirmed that there is external interference in our network, so we informed the customer to clear it. Always check the results of surrounding sites if you suspect interference.

1) Optimize PS RB Setup timer


PS drops were very high at the RNC. After investigation we found a lot of PS drops due to coverage, SRB/TRB resets and UU No Reply, related to the RbSetupRspTmr (Wait RB Setup Response timer).

Meaning: a timer for the RNC to wait for the RB setup response from the UE in the RB procedure. The RB reconfiguration message may be retransmitted three times when the timer expires. The parameter modification has no impact on the equipment. GUI Value Range: 300~300000. Unit: ms. Actual Value Range: 300~300000. MML Default Value: None. Recommended Value: 5000. Parameter Relationship: None. Service Interrupted After Modification: No (no impact on the UE in idle mode). Impact on Network Performance: None.

So what's recommended is as below:

3) High RTWP Due to Micro Wave Interference


A new 3G NodeB had completed integration, but the RTWP was very high. The site is a 2G/3G collocated site: GSM is on the 1800 MHz band and UMTS on 2100 MHz. From M2000 we got the RTWP values: sector 2 was around -80 dBm while sectors 1 and 3 were higher than -100 dBm, a serious problem. We did the following work on this site:
1. We exchanged the feeder and jumper; the RTWP did not change.
2. We replaced all WRFU and WBBP boards; the high RTWP did not disappear.
3. We blocked all GSM TRXs in the morning during idle hours, with no improvement.
4. After monitoring the KPIs for several days, we found the RTWP sometimes returned to a normal level, so we suspected interference. We checked the installation and saw another antenna very close to the Huawei antenna.

We negotiated with the other operator to reduce their microwave power; after they reduced it, the RTWP returned to a normal value.

1) DL power congestion solved by admission control and CPICH power optimization


Cells suffered from high DL power congestion affecting accessibility KPIs (RRC, CS RAB and PS RAB success rates). We took two actions:
1. Optimize CPICH power by decreasing it on both carriers:
MOD UCELL: CellId=40483, PCPICHPower=340;
MOD UCELL: CellId=40488, PCPICHPower=340;
2. Optimize the DL load thresholds used by call admission control (CAC) for conversational AMR, conversational non-AMR, other and handover scenarios. These thresholds decide to accept a call only if the load after admitting it is less than the threshold for that service type (default values: 80, 80, 85, 75%):
MOD UCELLCAC: CellId=40483, DlConvAMRThd=92, DlConvNonAMRThd=92, DlOtherThd=90, DlHOThd=93, DlCellTotalThd=95;
MOD UCELLCAC: CellId=40488, DlConvAMRThd=92, DlConvNonAMRThd=92, DlOtherThd=90, DlHOThd=93, DlCellTotalThd=95;
40483: DL power congestion released and accessibility KPIs improved

40488: DL power congestion released and accessibility KPIs improved
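A minimal sketch of the admission logic described above, assuming the RNC admits a call only when the predicted DL load after admission stays below the per-service threshold; the threshold names mirror the MML parameters set in this case, but the function itself is purely illustrative, not Huawei's implementation.

# Illustrative DL admission check: admit only if predicted load after
# admission is below the threshold for that service type (percent).
DL_THRESHOLDS = {
    "conv_amr": 92,      # DlConvAMRThd
    "conv_non_amr": 92,  # DlConvNonAMRThd
    "other": 90,         # DlOtherThd
    "handover": 93,      # DlHOThd
}

def admit_dl(current_load_pct: float, load_increment_pct: float, service: str) -> bool:
    return current_load_pct + load_increment_pct < DL_THRESHOLDS[service]

print(admit_dl(88.0, 3.0, "conv_amr"))  # True  (91 < 92)
print(admit_dl(88.0, 3.0, "other"))     # False (91 >= 90)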

1) PS Data traffic increases drastically and HSDPA traffic decreases simultaneously due to changed thresholds

Suddenly there was an increase in PS data traffic and a decrease in HSDPA traffic.

First we need to check whether there is an increase or decrease in RAB attempts.

If we look at the HS RAB attempts, there is a 50% decrease, hence the HS traffic decreases.
Analysis

We checked the codes assigned for HS services. The code assignment before and after is the same; there is no change in the PS and HS assigned codes (7 codes for HS and the remaining codes for R99). Then we found that the parameter below had been changed from D768 to D64.

Parameter Name: DL BE traffic threshold on HSDPA
Meaning: Rate threshold for the decision to use HS-DSCH to carry DL PS background/interactive services. When the maximum DL service rate is higher than or equal to this threshold, the service is carried on HS-DSCH; otherwise it is carried on DCH.
GUI Value Range: D8, D16, D32, D64, D128, D144, D256, D384, D512, D768, D1024, D1536, D1800, D2048, D3600, D7200, D8640, D10100, D13900
Recommended Value: D64

After returning the parameter to its original value:

4) Relief High UL CE congestion by LDR action


Site 4092 suffered from high UL CE congestion affecting the PS RAB success rate. Load reshuffling (LDR) is required to reduce the cell load and increase the access success rate. We enabled cell credit LDR (CELL_CREDIT_LDR), NodeB credit LDR (NODEB_CREDIT_LDR) and cell group credit LDR (LCG_CREDIT_LDR):
MOD UCELLALGOSWITCH: CellId=40926, NBMLdcAlgoSwitch=CELL_CREDIT_LDR-1;
MOD UCELLALGOSWITCH: CellId=40927, NBMLdcAlgoSwitch=CELL_CREDIT_LDR-1;
MOD UNODEBALGOPARA: NodeBName="C1_0_DEL4092P1(DSK_TE)", NodeBLdcAlgoSwitch=NODEB_CREDIT_LDR-1&LCG_CREDIT_LDR-1;
(both cells are under the same NodeB). Then the 1st, 2nd and 3rd LDR actions were set to ones that can relieve the UL CE problem, as not all LDR actions relieve UL CE (inter-frequency HO, for example, does not). Site 4092 had high CE usage, and after the LDR actions the CE usage decreased.

CE Congestion released & PS RAB SR improved

5) Poor PS CSSR due to UL Power congestion


Many cells had this problem; on each cell we took one or more of the actions below:
1) Increase UlTotalEqUserNum from 160 to 200. In CAC, a UL request is admitted (when algorithm 2 is applied) if (ENUtotal + ENUnew) / UlTotalEqUserNum < UlNonCtrlThdForHo/AMR/NonAMR/Other, depending on the service type (see the sketch after this list).
2) Activate UL LDR for CE/power and modify the UL LDR actions to ones that relieve UL CE. We enabled cell credit LDR (CELL_CREDIT_LDR), NodeB credit LDR (NODEB_CREDIT_LDR), cell group credit LDR (LCG_CREDIT_LDR) and UL_UU_LDR-1.
3) Lower the UL LDR trigger threshold from 65 to 55.

To make LDR act sooner: UlLdrTrigThd=55, UlLdrRelThd=45;
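A minimal sketch of the equivalent-number-of-users check quoted in action 1 (algorithm 2): admission passes only if (ENUtotal + ENUnew) / UlTotalEqUserNum is below the applicable threshold. The names follow the parameters above; the logic is illustrative only.

def admit_ul_enu(enu_total: float, enu_new: float,
                 ul_total_eq_user_num: int, threshold_pct: float) -> bool:
    # Admit if (ENU_total + ENU_new) / UlTotalEqUserNum < threshold.
    return (enu_total + enu_new) / ul_total_eq_user_num < threshold_pct / 100.0

# Raising UlTotalEqUserNum from 160 to 200 lets more users in before the
# same threshold is hit:
print(admit_ul_enu(130, 2, 160, 80))  # False (0.825 >= 0.80)
print(admit_ul_enu(130, 2, 200, 80))  # True  (0.66  <  0.80)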

Conclusion: Top 3 worst cells UL power Cong recover:

6) IRAT Performance Improvement Actions


Cause Analysis: CS IRAT and PS IRAT success rates were bad because of high physical channel failures at the worst cells (which refers to failures due to RF problems) plus failures due to congestion (found only in CS, as PS has no preparation phase). After finding these two major reasons for the CS and PS IRAT failures, we investigated further and reached the conclusions below.
Handling Process: The root cause of the poor IRAT performance was congestion at the target 2G cells and poor 2G coverage at the time of the IRAT handovers. Capacity augmentation was done by the 2G team on request for the congested 2G cells, and PS IRAT performance improved after this. We also did the parameter optimization below to improve IRAT performance further, as it was still below baseline.
1) 3A event: the estimated quality of the currently used UTRAN frequency is below a certain threshold and the estimated quality of the other system is above a certain threshold.

QOtherRAT + CIOOtherRAT >= TOtherRAT + H3a/2 and QUsed <= TUsed - H3a/2


Recommended values of TOtherRAT:
TargetRatCsThd: 16, namely -95 dBm
TargetRatR99PsThd: 16, namely -95 dBm
TargetRatHThd: 16, namely -95 dBm
We changed TargetRatHThd from 16 to 26.
2) We also changed PenaltyTimeForPhyChFail from 30 to 60 at the worst cells.

Parameter ID: PenaltyTimeForPhyChFail
Parameter Name: Inter-RAT HO Physical Channel Failure Penalty Timer
Meaning: Duration of the penalty for inter-RAT handover failure due to physical channel failure. The UE is not allowed to make inter-RAT handover attempts within the penalty time. For details about the physical channel failure, see 3GPP TS 25.331.
Unit: s
Actual Value Range: 0~65535
Default Value: 30


3) In 3A: the CIO is a composite of CIO(2G) + CIOoffset(3G->2G) in the condition QOtherRAT + CIOOtherRAT >= TOtherRAT + H3a/2, so we decreased the CIOoffset to give the 2G cell less priority as a handover target.
4) Increase timer T309.

Parameter ID: T309
Parameter Name: Timer 309
Meaning: T309 is started after the UE is reselected to a cell belonging to another radio access system in connected mode, or when the CELL CHANGE ORDER FROM UTRAN message is received. It is stopped after the UE is successfully connected in the new cell. The UE will continue the connection to UTRAN upon expiry. Protocol default value is 5.
Unit: s
Actual Value Range: 1~8
Default Value: 5
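To make the interplay of TOtherRAT, CIO and hysteresis in the 3A event more concrete, here is a minimal sketch of the entry check based on the conditions quoted above; the values and the function are illustrative, not RNC code.

def event_3a(q_other_rat: float, cio_other_rat: float, t_other_rat: float,
             q_used: float, t_used: float, h3a: float) -> bool:
    # 2G good enough:  Q_otherRAT + CIO_otherRAT >= T_otherRAT + H3a/2
    # 3G poor enough:  Q_used <= T_used - H3a/2
    other_ok = q_other_rat + cio_other_rat >= t_other_rat + h3a / 2.0
    used_ok = q_used <= t_used - h3a / 2.0
    return other_ok and used_ok

# Raising the absolute 2G threshold (e.g. from -95 to -90 dBm) makes the
# first condition harder to meet, so fewer handovers toward weak 2G cells:
print(event_3a(-92, 0, -95, -101, -98, 4))  # True
print(event_3a(-92, 0, -90, -101, -98, 4))  # False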

7) different RTWP between F1 and F2 of the same sector


During routine audits of the network we found that, for some sectors, there is a difference in RTWP between the F1 and F2 cells of the same sector.

To investigate, we verified the following three points:
1. Make sure the equipment is not faulty. We swapped sectors 1 and 3 (connected the antenna of sector 3 to the RRU and feeders of sector 1, and the antenna of sector 1 to the RRU and feeders of sector 3); the RTWP pattern stayed the same and did not move from sector 3 to sector 1, so the hardware was ruled out.
2. Make sure there is no external interference. We checked with a spectrum analyzer and found no external interference.
3. Confirm whether traffic load is the cause, which turned out to be the problem.

The second carrier is basically used for data traffic, and the HSDPA traffic on this cell is relatively high compared with the trend of the first carrier. Such a traffic difference, especially in HSDPA and HSUPA, can explain the RTWP difference between the first- and second-carrier cells. It is clear from the hourly snapshot below that the RTWP rises and falls with the number of HSDPA and HSUPA users.

here is F2 G31377

here is for F1 G31373 and F2 G31377

1) HSDPA low throughput analysis


During a DT of a cluster we found that the throughput is low in specific areas, as per the snapshot below.

Radio conditions were good; the CQI along that road was very good (average above 23), which we verified as per the snapshot below.

The Iub utilization is normal and there is no congestion, nor power congestion; below are snapshots of the Iub utilization at the test time:

Going deeper, we checked the number of codes assigned to the UE during the test and found that the number of codes was very low, as per the snapshot.

Reason: we found that the NodeB license for the number of codes was normal and the dynamic code allocation feature is activated on the NodeB, but when we checked the average number of users per hour in a day we found that the cell is serving a lot of HSDPA users; the snapshot below shows the hourly number of users.

8) HSDPA Rate was LOW due to 16QAM not activated


We were swapping a vendor, and after swapping the first cluster we found the HSDPA rate was low compared to the value before the swap.
1. We sent a DT engineer to start testing.
2. We also checked the Iub bandwidth, the number of HSDPA users configured on the sites and the number of codes configured for each site.
3. From point 2 we found everything is OK.
4. But from the DT log files we found that all the samples were on QPSK and there were zero samples on 16QAM.

We checked the NodeB configuration and found the 16QAM switch enabled on all the sites (from LST MACHSPAR), but one item was missing from our NodeB license: the HSDPA RRM license. After activating it, 16QAM worked and the throughput for the same HSDPA traffic increased.

1) Idle Mode 2G-3G optimization to stay more on 3G


To offload traffic from 2G and keep users under 3G coverage longer, change parameter FDDQMIN from -10 dB to -14 dB on the 2G side

and SSEARCHRAT from -8 to -9 on the 3G side. Inter-RAT measurement rule:

Start inter-RAT measurements when Squal <= SsearchRATm, where Squal = Qqualmeas - Qqualmin, i.e. when Qqualmeas <= Qqualmin + SsearchRATm.
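A minimal sketch of the rule above, illustrative only: the UE starts 2G measurements when Squal drops to SsearchRAT or below, so lowering SsearchRAT from -8 to -9 keeps the UE camped on 3G slightly longer. The Qqualmin value used below is an assumed example, not taken from this network.

def start_inter_rat_meas(qqualmeas_db: float, qqualmin_db: float,
                         s_search_rat_db: float) -> bool:
    # Measure 2G only when Squal = Qqualmeas - Qqualmin <= SsearchRAT.
    squal = qqualmeas_db - qqualmin_db
    return squal <= s_search_rat_db

# With an assumed Qqualmin of -18 dB, moving SsearchRAT from -8 to -9 means
# the UE waits for ~1 dB worse Ec/No before it starts looking at 2G.
print(start_inter_rat_meas(-26.5, -18, -8))  # True  (Squal -8.5 <= -8)
print(start_inter_rat_meas(-26.5, -18, -9))  # False (Squal -8.5 >  -9)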

3G coverage and traffic increased, which can be seen from the increase in HSDPA throughput (more users stay on 3G for a longer time). We also faced power and CE blocking due to the increased number of 3G users on those sites, which was subsequently fixed.

HSDPA UE Mean Cell (increased after the change, but reduced again since 20-Oct, probably due to increased power blocking)


1) Low PS traffic on F2 cells due to missing Blind HO neighbors.


The problem: after an F3 expansion on one site, while checking KPIs for the period before the expansion, we found that the site had very low PS traffic (very low PS RAB attempts) on the F2 cells and very high traffic (very high PS RAB attempts) on the F1 cells, while the network strategy does not allow this scenario. We found that the blind HO neighbor was not defined from F1 to F2:
ADD UINTERFREQNCELL: RNCID=1, CELLID=5022, NCELLRNCID=1, NCELLID=5025, BLINDHOFLAG=TRUE, NPRIOFLAG=FALSE, INTERNCELLQUALREQFLAG=FALSE;

[Flattened daily KPI export for TUBRNC1 cells GNH089A-F (CellIDs 8947-8952), 09/15-09/23/2012, 1440-minute granularity: RRC success rate and attempts, AMR RAB attempts/failures/success rate, and PS RAB attempts per F1 and F2 cell, used to compare PS RAB attempts on F1 versus F2 before and after the blind HO neighbor was added.]

9) PS RAB Succ Rate Degraded due to DRD Parameter and Blind HO


PS RAB success rate degraded below baseline on 1st Sept 2012. From the statistics, it is caused by the top worst second-carrier cells and is not an RNC-wide degradation.

The degradation was due to PS RAB UuFail, with its sub-counters PS RAB PhyChFail and PS RAB UuNoReply.

The degradation had two causes; after setting them right, the KPI returned to normal as seen in the two figures above:
1. The blind HO flag for the inter-frequency neighbor relation of the multi-carrier cells was set wrongly.

10) CSSR PS Degraded due to high PS Code Congestion after swap


PS CSSR was low; after investigation we found the failures were due to code congestion.

As a solution we decided to change the algorithm and enable LDR at cell level on the 2 sectors that had code congestion. The parameters are:
MOD UCELLALGOSWITCH: CellId=10051, NBMLdcAlgoSwitch=CELL_CODE_LDR-1;
MOD UCELLLDR: CellId=10051, DlLdrFirstAction=CodeAdj, DlLdrSecondAction=BERateRed, DlLdrBERateReductionRabNum=1, GoldUserLoadControlSwitch=ON;
PS CSSR improved after enabling the LDR parameters.

11) Low CS IRAT Handover Success Rate due to misconfiguration of the GSM band
The requested CS IRAT Handover Success Rate target is 95% but these 2 sites (3 sectors each) could only achieve around 60% during busy hour as shown in picture below

main reason for the CS IRAT HO failure is due to IRATHO.FailOutCS.PhyChFail.

Note that the blue counter is the sum of the other two. Next, checking the cell-GCell counters, we found that almost all of the failures happened toward the co-site GSM cells, as highlighted below.

[Per-relation counter table (UCELL_GCELL rows for 3G cells 43017-43019 and 43027-43029 toward their GSM neighbors): Sum of VS.IRATHO.AttOutCS.GCell versus Sum of VS.IRATHO.SuccOutCS.GCell, showing that the large attempt/success gaps are concentrated on the co-site GSM cells.]

Checking the IOS trace, we found that after the RNC sends RRC_HO_FROM_UTRAN_CMD_GSM to the UE, the UE replies with RRC_HO_FROM_UTRAN_FAIL, and the reason is physicalChannelFailure, as shown below.

The problem was that the GSM cell was created and configured in co-BCCH mode, in which the main BCCH is in the 850 MHz band while the other band is 1900 MHz, as shown below from ADD GCELL.

But when GSM is defined as external neighbor to the UMTS, it was defined in a band different from the actual one
TYPE Freq. Band Meaning: This parameter specifies the frequency band of new cells. Each new cell can be allocated frequencies of only one frequency band. Once the frequency band is selected, it cannot be changed. GSM900: The cell supports GSM900 frequency band. DCS1800: The cell supports DCS1800 frequency band. GSM900_DCS1800: The cell supports GSM900 and DCS1800 frequency bands. GSM850: The cell supports GSM850 frequency band. GSM850_DCS1800: The cell supports GSM850 and DCS1800 frequency bands. PCS1900: The cell supports PCS1900 frequency band. GSM850_PCS1900: The cell supports GSM850 and PCS1900 frequency bands. TGSM810: The cell supports TGSM810 frequency band. GUI Value Range: GSM900, DCS1800, GSM900_DCS1800, GSM850, PCS1900, GSM850_1800, GSM850_1900, TGSM810 Unit: None Actual Value Range: GSM900, DCS1800, GSM900_DCS1800, GSM850, PCS1900, GSM850_1800, GSM850_1900, TGSM810

MML Default Value: None Recommended Value: None Parameter Relationship: None Service Interrupted After Modification : Not involved Impact on Network Performance: None

ADD UEXT2GCELL:

BandInd Inter-RAT Cell Frequency Band Indicator

Meaning: When the inter-RAT cell frequency number is within the range 512-810, the parameter indicates whether this frequency number belongs to the DSC1800 or PCS1900 frequency band. GUI Value Range: GSM900_DCS1800_BAND_USED(Use GSM900M or 1800M frequency band), PCS1900_BAND_USED(Use GSM1900M frequency band) Unit: None Actual Value Range: GSM900_DCS1800_BAND_USED, PCS1900_BAND_USED MML Default Value: GSM900_DCS1800_BAND_USED Recommended Value: GSM900_DCS1800_BAND_USED Parameter Relationship: None Service Interrupted After Modification : No (No impact on the UE in idle mode) Impact on Network Performance: None

So when the UE tried to hand over to the GSM PCS1900 MHz band, the RNC had instructed the UE to search the DCS1800 band, which caused the failure.
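As the BandInd description above notes, GSM ARFCNs 512-810 are ambiguous between DCS1800 and PCS1900, so the RNC needs the indicator to tell the UE which band to search. A small illustrative sketch of that resolution (the ARFCN value is hypothetical, and this is not actual RNC logic):

def resolve_gsm_band(arfcn: int, band_ind: str) -> str:
    # Decide which band the UE should search for an ARFCN in 512-810,
    # the range where DCS1800 and PCS1900 channel numbers overlap.
    if not 512 <= arfcn <= 810:
        raise ValueError("only the ambiguous 512-810 range is handled here")
    return "PCS1900" if band_ind == "PCS1900_BAND_USED" else "DCS1800"

# The co-sited GSM cell was really PCS1900, but the external neighbour kept
# the default GSM900_DCS1800_BAND_USED, so the UE searched the wrong band
# and reported physicalChannelFailure:
print(resolve_gsm_band(661, "GSM900_DCS1800_BAND_USED"))  # DCS1800 (wrong)
print(resolve_gsm_band(661, "PCS1900_BAND_USED"))         # PCS1900 (correct)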

After the implementation, the CS IRAT Handover Success Rate has improved obviously as below:

12) Abnormal high RTWP due to improper setting on NodeB


During cluster acceptance in an operator swap project, cells W6374B3 and W6229B3 were found to always be the top worst cells for AMR drops (AMR drops over 7 days).

PS DCR was also having relatively poor KPIs, which was 5~30% in these 2 cells.

Scanning through for possible reason of drops, it was found both cells having abnormal high RTWP

We checked hardware problems related to parameters as following:

It was found there is improper setting in desensitization intensity (DSP DESENS) in both problem cells as shown below.

1. After reverting it, the RTWP of both cells returned to normal, around -105 dBm, as shown below.

2. PS DCR of these 2 cells (W6229B3 & W6374B3) showed significant improvement to level of 1% as shown below.

13) Poor PS IRAT Handover SSR due to congestion issue on adjacent 2G sites
Symptom: PS IRAT handover SSR of sector B and C degraded significantly at busy time.

Cause Analysis:
1. Missing neighbouring 2G cells;
2. Poor coverage;
3. IRAT configuration (3G or 2G side);
4. Congestion on adjacent 2G sites;
5. PS-CN topology and configuration (Intra-SGSN or Inter-SGSN handover, Routing Area Update failures);
6. Others.

Handling Process:
1. Checked the CS IRAT HO SSR of the site, which is much better than the PS IRAT HO SSR and acceptable, so neighbor definition and coverage should not be the issue (most probably it is congestion, since CS prepares the channel while PS does not).
2. PS IRAT HO SSR degraded only at busy time, which is most probably caused by congestion on adjacent 2G sites. We checked the TBF, GPRS and EDGE congestion of the adjacent 2G sites and found serious congestion.

T591B:

T591C:

T6425B:

T6574A:

T5565C:

3. After expansion on adjacent 2G sites, PS IRAT HO SSR was improved significantly.

14) Analysis Report for bad RTWP on one NodeB caused by External Interference
bad RTWP on one NodeB.

Action Plan:
1st Action: Request FLM team to perform below actions: Check connectors/combiner. Replace combiner, Check WMPT, And if still issue not clear, then re-commission the site.

After performing all the above actions the RTWP issue still existed on this site (3 sectors), so internal/external interference was suspected.

2nd Action: request to change the UARFCN from Freq 1 of band 1 (UL 9613, DL 10563) to Freq 6 (UL 9738, DL 10688), which is 25 MHz apart from the 1st frequency, on site 120031_A_Dahlan_3G for trial purposes. After changing the frequency, the RTWP was normal.

So now we know there is interference on the 1st frequency, so we will continue using this 2nd (trial) frequency until the interference on the first one is solved. The problem with the 2nd frequency is that the KPIs were not good, as seen below: CSSR decreased (RRC.FailConnEstab.NoReply bad), DCR increased (VS.RAB.AbnormRel.PS.RF.SRBReset / VS.RAB.AbnormRel.PS.RF.ULSync / VS.RAB.AbnormRel.PS.RF.UuNoReply bad), and traffic increased.

So we wanted to find the problem. 3rd Action: the first thing found wrong on the 2nd frequency in the parameter audit is that inter-frequency HO was not activated as it was on the 1st frequency: we found HO_INTER_FREQ_HARD_HO_SWITCH=FALSE in the HoSwitch settings, which means no IFHO is performed.


Note that there is another switch, HO_ALGO_LDR_ALLOW_SHO_SWITCH: it activates the inter-frequency HO triggered by LDR, and LDR only, i.e. it controls whether the LDR inter-frequency action can trigger an inter-frequency HO. The previous switch controls whether inter-frequency HO is activated at all, which is a prerequisite: if it is not activated, this switch has no effect.

Before, on the 1st frequency, some UEs performed inter-frequency HO when there was no good intra-frequency cell; without IFHO, the UE keeps working on the current frequency, which increases the traffic on that frequency and also increases the call drop probability.

After fixing the switch, IFHO worked normally; below is the IFHO success rate KPI.

There is improvement in all KPIs but they are still not good, so we need to improve further. 4th Action: we wanted to enhance the KPIs for the 2nd frequency even more. We checked the propagation delay distribution for site 120031_A_Dahlan_3G before and after changing the frequency and found the site overshooting after the frequency change:
ID / Counter / Description:
73423486 VS.TP.UE.0 - Number of RRC Connection Establishment Requests with Propagation Delay of 0
73423488 VS.TP.UE.1 - Number of RRC Connection Establishment Requests with Propagation Delay of 1
73423490 VS.TP.UE.2 - Number of RRC Connection Establishment Requests with Propagation Delay of 2
73423492 VS.TP.UE.3 - Number of RRC Connection Establishment Requests with Propagation Delay of 3
73423494 VS.TP.UE.4 - Number of RRC Connection Establishment Requests with Propagation Delay of 4
73423496 VS.TP.UE.5 - Number of RRC Connection Establishment Requests with Propagation Delay of 5
73423498 VS.TP.UE.6.9 - Number of RRC Connection Establishment Requests with Propagation Delay of 6~9
73423510 VS.TP.UE.10.15 - Number of RRC Connection Establishment Requests with Propagation Delay of 10~15
73423502 VS.TP.UE.16.25 - Number of RRC Connection Establishment Requests with Propagation Delay of 16~25
73423504 VS.TP.UE.26.35 - Number of RRC Connection Establishment Requests with Propagation Delay of 26~35
73423506 VS.TP.UE.36.55 - Number of RRC Connection Establishment Requests with Propagation Delay of 36~55
73423508 VS.TP.UE.More55 - Number of RRC Connection Establishment Requests with Propagation Delay Greater than 55

Each propagation delay unit represents three chips. The propagation distance of one chip is 78 m, so one propagation delay unit corresponds to 234 m. When the propagation delay is 0, the UE is 0-234 m from the base station; when it is 1, the UE is 234-468 m away; when it is 2, 468-702 m away; ...; when it is 55, 12870-13104 m away.
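The distance mapping quoted above (one propagation-delay unit = three chips = 3 x 78 m = 234 m) can be expressed as a small illustrative helper; it is only a reminder of the arithmetic, not a tool from the counter set.

CHIP_DISTANCE_M = 78
METERS_PER_TP_UNIT = 3 * CHIP_DISTANCE_M  # 234 m per propagation-delay unit

def tp_to_distance_range(tp_index: int):
    # Return the (min_m, max_m) distance band covered by one TP index.
    return tp_index * METERS_PER_TP_UNIT, (tp_index + 1) * METERS_PER_TP_UNIT

print(tp_to_distance_range(0))   # (0, 234)
print(tp_to_distance_range(20))  # (4680, 4914)
print(tp_to_distance_range(55))  # (12870, 13104)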

Here is before changing freq for 3 sectors

Here is after changing and RTWP was fixed

So as you can see, the 2nd frequency has more coverage. This comes from the fact that the 2nd frequency does not have continuous coverage like the 1st one, since it is not commonly used by the neighboring sites; this results in fewer handovers, so each cell effectively covers a larger area.

1) Bad quality (Ec/Io) due to high number of users / high RTWP


There was bad Ec/No as seen below in DT

This is not a permanent issue as found mainly in busy hour as seen below

The problem was mainly due to high traffic: as seen below, when the number of users increases, the RTWP rises up to -92 dBm, which degrades the quality (Ec/No) in UL, and likewise in DL.

So the problem was not external interference but high traffic. There are a number of solutions to address high traffic:

1) SHO failure due to Iur congestion


The main problem in this swap was Iur congestion.

Counter / Description:
VS.SHO.FailRLRecfgIur.OM.Tx - Number of failed radio link synchronous reconfigurations by the DRNC on the Iur interface because of OM intervention (cause value: OM Intervention)
VS.SHO.FailRLRecfgIur.CongTx - Number of failed radio link synchronous reconfigurations by the DRNC on the Iur interface because of insufficient RNC capability (cause value: Cell not Available, UL Scrambling Code Already in Use, DL Radio Resources not Available, UL Radio Resources not Available, Combining Resources not Available, Measurement Temporarily not Available, Cell Reserved for Operator Use, Control Processing Overload, or Not enough User Plane Processing Resources)
VS.SHO.FailRLRecfgIur.CfgUTx - Number of failed radio link synchronous reconfigurations by the DRNC on the Iur interface because of improper configurations (cause value: UL SF not supported, DL SF not supported, Downlink Shared Channel Type not supported, Uplink Shared Channel Type not supported, CM not supported, Number of DL codes not supported, or Number of UL codes not supported)
VS.SHO.FailRLRecfgIur.HW.Tx - Number of failed radio link synchronous reconfigurations by the DRNC on the Iur interface because of hardware failure (cause value: Hardware Failure)
VS.SHO.FailRLRecfgIur.TransCongRx - Number of failed radio link synchronous reconfigurations by the DRNC on the Iur interface because of insufficient RNC transmission capability (cause value: Transport Resource Unavailable)

Note that if the counter is Tx it refers to DRNC while Rx refers to SRNC

According to the RNC statistics, the DRNC (ZTE) shows a much larger number of failures (VS.SHO.FailRLRecfgIur.CongTx, VS.SHO.FailRLAddIur.Cong.Tx and VS.SHO.FailRLSetupIur.CongTx) than the SRNC (Huawei). Please find the respective pictures below.

After investigating the traces, the problem detected was heavy code congestion in the ZTE RNC; below are counters for some cells in the ZTE RNC.

[Per-cell counter table from the ZTE RNC (RNCId 79, Jul 2012): VS.RAC.DCCC.Fail.Code.Cong, VS.RAB.SFOccupy.Ratio (around 0.90-0.91), VS.RAB.SFOccupy.MAX (up to 256) and VS.RAB.SFOccupy (around 231-234), showing near-full code-tree occupancy and code congestion on the listed cells.]

So ZTE activated some algorithms on its side and changed some parameters to solve the problem, which was actually solved as seen below

15) DCR KPI degraded after NodeB rehoming from one RNC to another
Phenomenon Description: 29 NodeBs were rehomed to a new RNC on 24 May. The following shows that abnormal releases (the DCR numerator) increased significantly after 24 Jul while normal releases (the DCR denominator) remained at almost the same level.

Cause Analysis:
1) Missing ncells
2) RNC parameters or switches
3) RNC license

Handling Process: This is a case of post-rehoming KPI degradation, so we first checked the ncell script used for the rehoming operation and found a few missing ncells for inter-RNC neighboring relations. The complete ncells were added on the night of 25 Jul, and DCR improved by around 60%. Still, it was suspected there was another reason behind the degradation.

After checking all the KPI again, it was found there is abnormal increase in CS traffic after rehoming. Thus we started to suspect these increase are related to the DCR degradation.

Then we went into the raw counters of every KPI and found that the CS IRAT HO attempts had decreased to almost zero, and the same applied to PS attempts. This explained why DCR and CS traffic increased abnormally: CS calls were kept and dragged on 3G until they dropped.

3. Based on this assumption, we compared the configuration of RNC Depok and RNC Depok2: no difference in parameters or switch configuration. 4. We then continued the verification on the RNC license and found a missing item, Coverage Based Inter-RAT Handover Between UMTS and GSM/GPRS=ON, in RNC Depok2.

16) External Interference


Interference Found in below cells. Amar_Taru (2286) 3rd Sector. Panneri (2149) 1st Sector.

Interference Test Analysis of Amar_Taru 3rd Sector / Panneri 1st Sector

Field test observation: we changed the azimuth of the Panneri 1st sector from 40° to 160°, and at that moment the RTWP suddenly decreased. This means some unknown frequency is being generated by an unknown source near Andheri Station, at a frequency the same as or very close to the RCOM UL centre frequency (1961.5 MHz).

17) AMR Call Drop Resolution by 2D 2F Parameter Change

Phenomenon Description: the RNC has a high AMR call drop rate.

1. It is found that AMR call drop is happening after the compress mode is triggered from NASTAR.

Analysis

Change the 2D 2F parameter setting of issued cells from:

After Parameter change:


Solution

there is improvement in AMR call drop rate after the changes done in IRAT 2D 2F parameter settings.

18) Low PS CSSR due to Uplink Power Congestion


Low PS CSSR on sector B of the site at busy time.

Cause Analysis 1. Resource Congestion; 2. Improper configuration; 3. RF issue; 4. CN issue; 5. Others

Handling Process: 1. Checked the traffic of the sector B, and the site has high traffic;

RTWP is very high at busy time

2. Check the PS RAB establish congestion on M2000, and the site has significantly high uplink power congestion;

The HSUPA user number always hits the limit (20);

3. Analyze the coverage on Naster. The analysis result shows that the site can reach a distant area (TP=20, Distance=4.6km).

4. With the Nastar result, we then check the site on Google earth. It is clear that the site has overshooting and overlapping issue. Adjusting azimuth or downtilt is suggested.

Adjust the downtilt and azimuth as the red arrow shows, the issue was recovered with the reduced traffic.

19) WCDMA DL Power Congestion Troubleshooting


We found DL power congestion instantly.

If the TCP ratio is very high, it means downlink power congestion. Then we can:

1. For single-carrier cells, use downlink LDR:


MOD CELLALGOSWITCH: CellId=0, NBMLdcAlgoSwitch=DL_UU_LDR-1;
MOD CELLLDR: CellId=0, DlLdrFirstAction=BERateRed, DlLdrBERateReductionRabNum=1, GoldUserLoadControlSwitch=ON;

2. For an F1 cell, set LDR as follows:


MOD CELLALGOSWITCH: CellId=0, NBMLdcAlgoSwitch=DL_UU_LDR-1;
MOD CELLLDR: CellId=0, DlLdrFirstAction=BERateRed, DlLdrSecondAction=InterFreqLDHO, DlLdrBERateReductionRabNum=1, GoldUserLoadControlSwitch=ON;

Then we can monitor the following counters to check the effect of the LDR actions: VS.LCC.LDR.InterFreq, VS.LCC.LDR.BERateDL, VS.LCC.LDR.BERateUL. Note: power congestion does not usually happen in dual-carrier cells. For a single-carrier site with serious power congestion, carrier expansion is recommended.

1) PSC planning to enhance CSSR

The RNC had a normal CSSR, but to improve it further a PSC audit and replan should be done.
After the PSC change, CSSR improved.

Below is the cells that had PSC planning on

1) Uplink Power Congestion



Main Root Problem: High RAB failures on site 102373_SEKELOA_3G due to uplink power congestion. Analysis : Uplink power congestion was found on site 102373_SEKELOA_3G although parameter ULTOTALEQUSERNUM has been set to 200 (=maximum value)




Counter Description for LDR State:


Action : Disable UL power CAC for cell with high UL power congestion. For any cell with UL power congestion still appear although ULTOTALEQUSERNUM has been set to 200 (=maximum value), we decide to disable UL power CAC by setting NBMUlCacAlgoSelSwitch in UCELLALGOSWITCH to ALGORITHM_OFF.



Result: after changing the NBMUlCacAlgoSelSwitch setting, there is an improvement in uplink power congestion.


1) RF Coverage problem Solved Later by Modifying parameters related to cell radius


From DT we found poor Ec/Io and slightly poor RSCP near cell 080086, which was causing the main problems.

Solution
According to Coverage Prediction Plot from Atoll we found that there is coverage shrink in the area due to bad cell environment and so planned to change the cpich power

Increase the CPICH power from 330 to 390, and RlMaxDlPwr from 0 to 10 for CS services and from 20 to 40 for PS services for the 384 and 256 kbps RABs.
Service: RL Max Downlink Transmit Power (dB) / RL Min Downlink Transmit Power (dB) / Downlink SF
CS Domain:
12.2 kbps AMR: -3 / -18 / 128
28 kbps: -2 / -17 / 64
32 kbps: -2 / -17 / 64
56 kbps: 0 / -15 / 32
64 kbps (VP): 0 / -15 / 32
PS Domain:
0 kbps: -2 / -17 / 256
8 kbps: -8 / -23 / 128
32 kbps: -4 / -19 / 64
64 kbps: -2 / -17 / 32
144 kbps: 0 / -15 / 16
256 kbps: 2 / -13 / 8
384 kbps: 4 / -11 / 8

Also

1) RRC Rejections and RAB Failures due to Code Congestion in WCDMA
KPI Analysis:

Solution:
If the HS-PDSCH Reserved Codes value is excessively high, HSDPA code resources are wasted and the admission rejection rate of R99 services increases due to lack of code resources. We checked the site parameter configuration and found the HS-PDSCH code number was 12, so we changed it to 5 as per the baseline. After reducing the HS-PDSCH codes, the problem was solved.
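To see why reserving 12 codes for HS-PDSCH starves R99, recall that HS-PDSCH codes sit at SF16, so the 16 SF16 branches of the code tree are shared between HSDPA, R99 and the common channels. A minimal illustrative sketch, not the RNC's actual code-allocation algorithm; the one-code allowance for common channels is an assumed approximation.

TOTAL_SF16_CODES = 16  # SF16 branches in the DL code tree

def r99_sf16_codes_left(hs_pdsch_reserved: int, common_ch_sf16_equiv: int = 1) -> int:
    # Assumption: common channels consume roughly one SF16-equivalent.
    return TOTAL_SF16_CODES - hs_pdsch_reserved - common_ch_sf16_equiv

print(r99_sf16_codes_left(12))  # 3  -> R99 admissions rejected under load
print(r99_sf16_codes_left(5))   # 10 -> code congestion relieved, as in this case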

20) CS IRAT HO Problem due to LAC Misconfiguration [HO]

When we implemented the work order of RNC in one region we got the IRAT HO Success Rate of 24%. After we executed one work order on 69 sites of One RNC in one region we got so many IRAT failures.

BSC6900UCell / IRATHO.FailRelocPrepOutCS.UKnowRNC:
Label=UBEN077_S1, CellID=20771: 3350
Label=UBEN077_S3, CellID=20773: 1998
Label=UBEN007_S2, CellID=20072: 1796
Label=UBEN077_S2, CellID=20772: 940
Label=UBEN017_S1, CellID=20171: 874
Label=UBEN038_S3, CellID=20383: 844
Label=UBEN070_S1, CellID=20701: 631
Label=UBEN901_S2, CellID=29012: 507
Label=UBEN039_S1, CellID=20391: 482
Label=UBEN901_S3, CellID=29013: 388
Label=UBEN901_S1, CellID=29011: 327
Label=UBEN017_S2, CellID=20172: 314
Label=UBEN070_S2, CellID=20702: 308
Label=UBEN028_S2, CellID=20282: 255
Label=UBEN025_S1, CellID=20251: 252
Label=UBEN032_S2, CellID=20322: 218

1. Checked the 3G-to-GSM handover neighbor data in the RNC, checking each GSM neighbor cell's information; no problem was found there.
2. Traced signaling in the RNC using the LMT and found many "prepare handover failed" messages with reason "unknown target RNC". This is backed up by the M2000 counters IRATHO.FailOutCS.PhyChFail and IRATHO.FailRelocPrepOutCS.UKnowRNC.
3. Based on that, we checked the LAC configured in the MSC, checked the MSC data and found that the LAI was wrong.

After the LAI modifications in the RNC & MSC we have got The IRAT HO success Rate of 97%

21) How to improve PS IRAT Success rate


3G to 3G and 3G to 2G neighbor list review and optimization

3G-to-2G Handover Measurement Event 2D: triggered when QUsed <= TUsed2d - H2d/2, where TUsed2d is set by the following parameters:


Parameter / Recommended Value:
InterRATCSThd2DEcN0: -14, namely -14 dB
InterRATR99PsThd2DEcN0: -15, namely -15 dB
InterRATHThd2DEcN0: -15, namely -15 dB
InterRATCSThd2DRSCP: -100, namely -100 dBm
InterRATR99PsThd2DRSCP: -110, namely -110 dBm
InterRATHThd2DRSCP: -110, namely -110 dBm
HystFor2D: 4, namely 2 dB
TimeToTrig2D: D320, namely 320 ms

- Speed up the handover to avoid failures due to poor RF by increasing INTERRATR99PSTHD2DRSCP from -110 to -100 dBm and INTERRATHTHD2DRSCP from -110 to -105 dBm.

- Increase the penalty time PENALTYTIMEFORPHYCHFAIL from 30s to 60s to alleviate 2G congestion and control the number of 3G to 2G handovers ( avoid handover to high congestion 2G cell).

- Adjust parameter INTERRATPHYCHFAILNUM from 3 to 1 to start the penalty period right after the first physical channel failure.

Parameter ID: InterRatPhyChFailNum
Parameter Name: Inter-RAT HO Physical Channel Failure THD
Meaning: Maximum number of inter-RAT handover failures allowed due to physical channel failure. When the number of inter-RAT handover failures due to physical channel failure exceeds this threshold, a penalty is given to the UE. During the time specified by "PenaltyTimeForInterRatPhyChFail", the UE is not allowed to make inter-RAT handover attempts. For details about the physical channel failure, see 3GPP TS 25.331.

3G-to-2G Handover Measurement Event 3A: triggered when QOtherRAT + CIOOtherRAT >= TOtherRAT + H3a/2 and QUsed <= TUsed - H3a/2.

TOtherRAT is the absolute inter-RAT handover threshold. Based on the service type (CS, PS domain R99 service, or PS domain HSPA service), the threshold can be configured through the following parameters: TargetRatCsThd, TargetRatR99PsThd, TargetRatHThd.

Parameters Optimization (SET 2): adjust parameters TARGETRATR99PSTHD and TARGETRATHTHD from -95 to -90 dBm.

- For GSM cells that contribute high failures affecting the IRAT success rate, you can decrease their priority by adjusting the target cell parameters (NPRIOFLAG, NPRIO, RATCELLTYPE).

Conclusion & Recommendations:

After implementing the actions according to the KPI improvement plan (page 3), the target KPI, PS IRAT HO success rate, improved significantly from about 85.6% to 94.8%.

1) R99 PS drop ratio increased after activation of 64QAM, due to CM on HSPA+ not being activated
R99 PS drops increased after activation of 64QAM on March 5:

Firstly, activation time is confirmed by RNC operation log:

From counter analysis, we found per RNC that there are nearly 300 drops on PS R99 drop:

And TOP cell has nearly 30 drops R99 PS drop, other cell has several times R99 PS drop:

At the same time, H2D time begins to increase when activation of 64QAM is made:

Analyzing the RNC configuration, find that HSPA+ service is not allowed to start CM:

This configuration means a 64QAM user in bad coverage must move from HSDPA to DCH before it can start CM, and such users are more likely to drop. In the IOS traces, some users drop after the 64QAM UE returns to DCH because of bad coverage:

Solution

According to the above analysis, HSPA+ service cannot support CM, so HSPA+ users in bad coverage return to DCH, which increases the R99 PS drop ratio. SET UCMCF: EHSPACMPermissionInd=TRUE;

2) SHO OVERHEAD PROBLEM solved by optimizing event 1B


While working on the B project I found that the SHO overhead in the RNCs was high. For the trial optimisation I proposed two batches:
1st Batch:
1. Select cells where SHO overhead is high and which have high traffic/congestion.
2. Adjust the antenna e-tilt to control coverage. If the e-tilt is already at maximum, go to (3).
3. Adjust the SHO parameters IntraRelThdFor1BCSNVP and IntraRelThdFor1BPS from 12 (meaning 6 dB) to 10 (meaning 5 dB) to increase the probability of triggering event 1B and improve the SHO overhead.
If the SHO overhead is not improved, apply the 2nd batch.
2nd Batch:
1. Select cells where SHO overhead is still high and change TRIGTIME1B from 640 to 320 (ms) for further improvement.

After applying above, significant improvement occurred

22) FACH Congestion Reduction by increasing Air + Iub B.W


FACH congestion can be attributed to one of the following three reasons: a. air interface congestion (the SF64 code where the SCCPCH is configured is the bottleneck); b. Iub interface congestion (the FACH bandwidth, configured as 4500 bytes/s, is the bottleneck); c. both the air and Iub interfaces are bottlenecks.

Trial proposed area

ID 67199740, Counter VS.CRNC.IUB.FACH.Bandwidth, Description: FACH Bandwidth of CRNC for Cell. This counter provides the bandwidth of common channels for the CRNC on the Iub interface, in bytes per second.

ID 73439970, Counter VS.FACH.DCCH.CONG.TIME, Description: Congestion Duration of DCCHs Carried over FACHs for Cell.
ID 73439971, Counter VS.FACH.DTCH.CONG.TIME, Description: Congestion Duration of DTCHs Carried over FACHs for Cell.
These counters provide the duration for which the DCCHs/DTCHs carried over the FACHs in a cell are congested. Unit: s.

Step1: Increasing the SF of 2nd SCCPCH from SF64 to be SF32

It gave some improvement but not an acceptable result. Step 2: increase the Iub FACH bandwidth from 4500 B/s to 9000 B/s (on top of SF32).

This solved the problem

23) Soft handover Overhead Reduction using event 1A


It is found that the main contributor to the SPU load is soft handover. Most of the NodeBs are six-sector NodeBs, so more RLs are established per UE. From the network audit analysis, 27% of the SPU load is caused by soft handover.

Solution:
The event 1A triggering threshold is reduced to make the event less likely to occur. Below is the command: MOD UCELLINTRAFREQHO: RNCId=XX, CellId=XXXX, IntraRelThdFor1ACSVP=5, IntraRelThdFor1ACSNVP=5, IntraRelThdFor1APS=5; (changed from the default value of 6). Below is the result after the change:

The soft handover overhead and SPU Load reduced after the change. The SPU load usage reduction more than 10% In addition, the call drop rate have not changed after the changes

Degrade in Paging Success Rate after IU-FLEX implementation Customer in Country M, at office M , reported that there are degradations in Paging Success Rate for 1 RNC, IPRN5. The Paging Success Rate (PSR) for idle UE on RNC IPRN5 was degraded since 14th Sep 2012.

CAUSE ANALYSIS

The problem is shown in Figure 1, where the IU Paging Success Ratio is degraded.

Figure 1 PSR for idle UE

As shown in Figure 2, the RRC connection success rate stayed almost the same. This indicates that there is nothing wrong with the common parts shared by RRC connection and paging, including the Uu interface, NodeB, Iub and some internal modules of the RNC.

Figure 2 RRC successful connection rate

Besides, there is no flow-control discarding detected, as shown in Figure 3.

Figure 3 CPUSALLVS.PAGING.FC.Disc.Num.CPUs


In addition, from the performance files there is no PCH congestion found at all, as shown in Figure 4, and there is no paging discarding either. This shows that paging messages are successfully delivered from the Iu interface to the Uu interface. Together with point 1, this indicates the PSR deterioration is not caused by the UTRAN.

Figure 4 UCELLALLVS.RRC.Paging1.Loss.PCHCong.Cell

ROOT CAUSE ANALYSIS: the PSR for idle UEs on the RNC is calculated by the formula PSR = VS.RANAP.Paging.Succ.IdleUE / VS.RANAP.Paging.Att.IdleUE. The denominator and the numerator are shown in Figure 5.

Figure 5 The denominator and the numerator for PSR
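A minimal sketch applying the PSR formula above; the counter values used here are illustrative, not taken from RNC IPRN5.

def psr_idle_ue(paging_succ_idle_ue: int, paging_att_idle_ue: int) -> float:
    # PSR = successes / attempts, in percent.
    return 100.0 * paging_succ_idle_ue / paging_att_idle_ue if paging_att_idle_ue else 0.0

# Extra repeated paging attempts after Iu-Flex inflate the denominator,
# which pulls the ratio down even if successes stay unchanged.
print(f"{psr_idle_ue(9_000, 10_000):.1f}%")  # 90.0%
print(f"{psr_idle_ue(9_000, 11_500):.1f}%")  # 78.3%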

From a one-hour Iu trace, there are 286 location update failures out of 4042 location update requests in total, with the reasons shown in Figure 6. All the failures were received from the CN.

Figure 6 Location updating failure with different cause FINDINGS:

From the analysis, we could say that after IU-FLEX, repeated paging mechanism could be altered, which could bring in more useless paging attempts. As a result, PSR on RNC is degraded.

Uplink power Congestion analysis and solution


In the country M project, as new construction develops, the network environment, the type of services and the number of users have also changed; some cells of new UMTS sites suffer uplink power congestion, with a big impact on the cell KPIs. From M2000 we extracted the counter VS.RAB.FailEstabPS.ULPower.Cong for the top issue cell 050076_3G-3, as below:
2012-10-15, Label=050076_3G-3, CellID=37836: 220
2012-10-16, Label=050076_3G-3, CellID=37836: 453
2012-10-17, Label=050076_3G-3, CellID=37836: 124

We found that when the UL power congests, the traffic is somewhat high, so we reduced the CPICH power by 1 dB to shrink the coverage; but the UL power congestion remained after the change, so we suspected that lack of resources is not the root cause. We checked the current network parameters and found that the uplink CAC algorithm switch of the issue cell is set to ALGORITHM_SECOND (the equivalent user number algorithm).
Algorithm / Content:
ALGORITHM_OFF: Uplink power admission control algorithm disabled.
ALGORITHM_FIRST: Power-based increment prediction algorithm for uplink admission control.
ALGORITHM_SECOND: ENU-based admission algorithm for uplink admission control.
ALGORITHM_THIRD: Power-based non-increment prediction algorithm for uplink admission control.

With ALGORITHM_SECOND, the network performs admission control based on the uplink equivalent number of users (ENU) of the cell and the predicted ENU after admitting new users; different service types are equivalent to different numbers of users. When the cell's equivalent number of users exceeds the configured value (here 95), the cell denies user access.

Based on the algorithm principle, we set ALGORITHM_OFF to disable the uplink call admission control algorithm. After monitoring the KPIs for several days, we found that the KPI reached the normal level and there were no abnormal fluctuations in the other KPIs.

For uplink power congestion, we can analyze from the following two aspects:
1. Lack of resources: a) check whether CE resources are adequate; b) adjust the coverage by modifying the pilot power and the maximum transmission power, or by RF optimization.
2. Parameter settings causing the issue: adjust cell parameters such as the access control algorithm.

Cells Location with LAC Borders


Below mentioned plot shows cells location with less than 98% RRC Registration success rate with LAC borders, most of cells are located on LAC borders / covering in open areas.


FACH Power & IdleQhyst2s Trial


FACH power was changed on the B-A LAC border cells from 1 to 1.8 dB. The changes were implemented on the night of 20th July.
RRC registration has shown a slight improvement compared with last Monday's hourly trend. RRC registration attempts reduced as expected after changing IDLEQHYST2S from 2 to 4 dB, but there was no change in the RRC success rate for registration; these changes were reverted on 20th July, before the FACH trial changes.

Cluster

Shown below is the Overall TP distribution for X area Cells. As shown in Map these cells are facing in open area with no 3G coverage overlap. Nearly 20.0% of samples lies in >1.5 Km
CS traffic has increased after the swap, hence there is no loss of coverage after the swap from the legacy vendor.
Cell Traffic Volume, CS / Week: Pre-Swap KPI 33,795; Post-Swap KPI 40,479.


The cause of the problem was that the attenuation was not set and the TMAs were not configured, although they were physically present on the site. On investigation we found that the cells with high RSSI had TMAs before the swap, but these were not configured in the Huawei system afterwards; the attenuation also needs to be set accordingly. Here are the process and commands to check.

When there is no TMA, the attenuation value is set to 0. When a 12 dB TMA is used, the attenuation value is set between 4 dB and 11 dB. When a 24 dB TMA is used, the attenuation value is set between 11 dB and 22 dB. When a 32 dB TMA is used, the attenuation value is set between 22 dB and 30 dB. This command takes effect immediately after execution.
ATTEN (Attenuation of RX Channel, dB). Meaning: the value of WRFU/RRU Rx attenuation. GUI Value Range: 0, 4~30. Actual Value Range: 0, 4~30. Unit: dB. Default value: none specified. Recommended value: None.
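The attenuation rules above can be summarised in a small illustrative helper; the ranges come straight from the text, but the function itself is not a Huawei tool.

# Allowed ATTEN (RX attenuation) range per TMA gain, per the rules above.
ATTEN_RANGES_DB = {
    None: (0, 0),   # no TMA -> attenuation 0
    12: (4, 11),    # 12 dB TMA
    24: (11, 22),   # 24 dB TMA
    32: (22, 30),   # 32 dB TMA
}

def allowed_atten_range(tma_gain_db):
    # Return the (min, max) RX attenuation in dB for a given TMA gain.
    return ATTEN_RANGES_DB[tma_gain_db]

print(allowed_atten_range(None))  # (0, 0)  -- what was wrongly left in place after the swap
print(allowed_atten_range(24))    # (11, 22)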

Post-correction of the attenuation, here is the RSSI after implementation. So after a swap we should check these items to avoid RTWP issues.

Report for PS RAB Success/UL Power Congestion analysis and Improved by changing CELL Loading Reshuffling CELLULDR Parameters

Detail Analysis:
In the Moran RNC on the Mosaic project, PS RAB success / UL power congestion issues were noticed, and the PS RAB success rate was affected. To improve it, the cell load reshuffling (UL_UU_LDR) parameters were changed, and after the change the PS RAB success rate returned to normal.

Failures Reason Analysis


Analysing the counters related to PS RABs, we found that many calls were failing on the counter "Number of Failed PS RAB Establishments for Cell (UL Power Congestion)" on one particular cell, Eircom Wicklow_1.

This counter means that there is power congestion in the uplink. The pre-change KPIs for Eircom Wicklow_1 are attached below for reference.

Cluster Name: Eircom Wicklow_1 (pre-change)

Start time          RAB Setup Success Rate(PS)   Call Setup Success Rate(PS)   Failed PS RAB Establishments (UL Power Congestion)
11/23/2012 15:00    96.27%                       96.12%                        95
11/23/2012 16:00    93.45%                       92.33%                        208
11/23/2012 17:00    54.76%                       54.33%                        297
11/23/2012 18:00    74.66%                       71.56%                        592
11/23/2012 19:00    48.18%                       47.55%                        653
11/23/2012 20:00    69.51%                       69.10%                        432
11/23/2012 21:00    89.42%                       89.03%                        210
11/23/2012 22:00    94.21%                       94.11%                        117

Action Taken to Improve:


To improve the UL power congestion, one parameter related to the CAC algorithm was changed:
MOD UCELLCAC: UlTotalEqUserNum (UL total equivalent user number) increased from 95 to 200.

However, even after changing this parameter the UL power congestion was not fully resolved; there was some improvement, but congestion remained.

So we changed the CELL LOAD RESHUFFLING (LDR) parameters.


STEPS: A) First switch on the UL LDR algorithm: MOD UCELLALGOSWITCH: CellId=65361, NBMLdcAlgoSwitch=UL_UU_LDR-1;

B) Change the LDR parameters: MOD UCELLLDR: CellId=65361, ULLdrFirstAction=BERateRed, ULLdrBERatReductionRabNum=1, GoldUserLoadControlSwitch=ON;

DESCRIPTION OF PARAMETERS: ULLdrFirstAction=BERateRed selects rate reduction of BE (interactive/background) services as the first action taken when the cell enters UL load reshuffling; ULLdrBERatReductionRabNum defines how many BE RABs are downgraded per LDR action; GoldUserLoadControlSwitch controls whether load-control actions may also be applied to gold-priority users.
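As an illustration only (not Huawei's implementation), the sketch below shows the kind of behaviour these LDR parameters configure: when UL congestion is detected, a limited number of BE RABs are selected for rate reduction per LDR cycle. The load figures, threshold and RAB list are assumptions.

    # Illustrative sketch of the UL LDR behaviour configured above: on UL congestion,
    # the first LDR action downgrades a limited number of BE RABs per cycle.
    def ul_ldr_step(ul_load: float, ldr_trigger_threshold: float, be_rabs: list,
                    rab_num_per_action: int = 1) -> list:
        """Return the BE RABs selected for rate reduction in one LDR cycle."""
        if ul_load <= ldr_trigger_threshold:
            return []                                   # no congestion, no action
        # Pick the configured number of BE RABs (here the highest-rate ones) to downgrade.
        candidates = sorted(be_rabs, key=lambda r: r["rate_kbps"], reverse=True)
        return candidates[:rab_num_per_action]

    rabs = [{"id": 1, "rate_kbps": 384}, {"id": 2, "rate_kbps": 128}]
    print(ul_ldr_step(ul_load=0.82, ldr_trigger_threshold=0.75, be_rabs=rabs))  # downgrades RAB 1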

KPI Analysed: KPIs were analysed after the change and improvement was found.


KPIs attached for reference:

Cluster Name: Eircom Wicklow_1 (post-change)

Start time          RAB Setup Success Rate(PS)   Call Setup Success Rate(PS)   Failed PS RAB Establishments (UL Power Congestion)
12/08/2012 15:00    99.85%                       99.73%                        3
12/08/2012 16:00    99.79%                       99.56%                        0
12/08/2012 17:00    100.00%                      99.64%                        0
12/08/2012 18:00    99.89%                       99.84%                        0
12/08/2012 19:00    99.92%                       99.88%                        0
12/08/2012 20:00    99.88%                       99.67%                        0
12/08/2012 21:00    100.00%                      100.00%                       0
12/08/2012 22:00    99.97%                       99.89%                        0

1.4 The UL power congestion problem was resolved after changing these parameters.

Report on PS RAB failures due to UL power congestion, improved by changing the UCELLCAC UL equivalent user number parameter

Detail Analysis:
In the Meteor RNC on the Mosaic project, the PS RAB setup success rate degraded on one site. To improve it, the UCELLCAC UL equivalent user number parameter was changed, and after the change the PS RAB KPI recovered.

Failures Reason Analysis


Analysing the counters related to CS/PS RABs, we found that many calls were failing on the counter "Number of Failed PS RAB Establishments for Cell (UL Power Congestion)" on one particular cell, Ashford_MMC_2. This counter means that there is power congestion in the uplink. The pre-change KPIs for Ashford_MMC_2 are attached below for reference.

Cluster Name: Ashford_MMC_2 (pre-change)

Start time         RAB SSR (CS)   CSSR (CS)   RAB SSR (PS)   CSSR (PS)   Failed PS RAB Est. (UL Power Congestion)
12/5/2012 19:00    79.31%         79.31%      92.23%         92.11%      47
12/5/2012 19:00    100.00%        100.00%     92.22%         91.79%      100
12/5/2012 20:00    86.21%         86.21%      54.76%         54.33%      306
12/5/2012 20:00    82.98%         82.98%      54.23%         53.96%      590
12/5/2012 21:00    82.61%         82.61%      43.78%         43.68%      574
12/5/2012 21:00    87.50%         85.00%      46.31%         46.20%      422
12/5/2012 22:00    86.67%         86.67%      86.73%         86.63%      102
12/5/2012 22:00    100.00%        100.00%     87.31%         87.31%      168

Action Taken to Improve:


To improve the UL power congestion, one parameter related to the CAC algorithm was changed:

MOD UCELLCAC: UlTotalEqUserNum (UL total equivalent user number) increased from 95 to 200.

DESCRIPTION OF PARAMETER: Impact on network performance: if the value is too high, the system load after admission may be too large, which affects system stability and can lead to system congestion. If the value is too low, the probability of user rejection increases, resulting in wasted idle resources.

KPI Analysed: KPIs were analysed after the change and improvement was found.


KPIs attached for reference:

Cluster Name: Ashford_MMC_2 (post-change)

Start time          RAB SSR (CS)   CSSR (CS)   RAB SSR (PS)   CSSR (PS)   Failed PS RAB Est. (UL Power Congestion)
12/15/2012 19:00    96.77%         96.77%      99.75%         99.62%      0
12/15/2012 19:00    100.00%        100.00%     99.83%         99.28%      0
12/15/2012 20:00    100.00%        100.00%     100.00%        99.75%      0
12/15/2012 20:00    100.00%        100.00%     99.92%         99.82%      0
12/15/2012 21:00    100.00%        93.33%      99.89%         99.78%      0
12/15/2012 21:00    100.00%        100.00%     100.00%        99.58%      0
12/15/2012 22:00    100.00%        90.00%      100.00%        100.00%     0
12/15/2012 22:00    100.00%        100.00%     99.89%         99.55%      0

1.4 The UL power congestion problem was resolved after changing this parameter.

Report on CS RAB failures due to DL power congestion, improved by setting the DL CAC algorithm switch to OFF

Detail Analysis:
In the Meteor RNC on the Mosaic project, the CS RAB setup success rate degraded on one site. To improve it, the DL CAC algorithm switch was set to OFF, and after the change the CS RAB KPI recovered.

Failures Reason Analysis


Analysing the counters related to CS RABs, we found that many calls were failing on the counter "Number of Failed CS RAB Establishments for Cell (DL Power Congestion)" on one particular cell, BallyguileHill_1. This counter means that there is power congestion in the downlink. The pre-change KPIs for BallyguileHill_1 are attached below for reference.

Cluster Name: BallyguileHill (pre-change)

Cell                      Start time          CSSR (CS)   Failed CS RAB Est. (DL Power Congestion)
BallyguileHill_MMC_F1_1   11/28/2012 18:00    98.02%      44
BallyguileHill_MMC_F2_1   11/28/2012 19:00    97.17%      3
BallyguileHill_MMC_F1_1   11/28/2012 19:00    96.37%      17
BallyguileHill_MMC_F2_1   11/28/2012 20:00    97.00%      2
BallyguileHill_MMC_F1_1   11/28/2012 20:00    96.87%      14
BallyguileHill_MMC_F2_1   11/28/2012 21:00    96.99%      3
BallyguileHill_MMC_F1_1   11/28/2012 21:00    96.37%      13
BallyguileHill_MMC_F1_1   11/28/2012 22:00    98.00%      1
BallyguileHill_MMC_F1_1   11/28/2012 23:00    99.48%      2

Action Taken to Improve:


To improve the DL power congestion, one parameter related to the CAC algorithm was changed: the DL CAC algorithm switch was changed from ALGORITHM_FIRST to OFF.

DESCRIPTION OF PARAMETER: 1. In the OFF state, the DL CAC algorithm is disabled. 2. In the ALGORITHM_FIRST state, load-factor prediction is enabled.

If ALGORITHM_FIRST is applied, new calls are rejected once the predicted load factor reaches the threshold, whereas if the algorithm is disabled the cell keeps accepting new calls. We set it to OFF mainly when a site carries high load, while ALGORITHM_FIRST is used when there are more sites nearby: once a cell reaches a certain load threshold, new calls can be carried by a neighbouring NodeB instead.
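The sketch below contrasts the two settings; the load values, the predicted increment and the threshold are illustrative assumptions.

    # Sketch contrasting the two DL CAC settings discussed above; the load figures
    # and the predicted increment are assumed example values.
    def dl_admit(current_dl_load: float, predicted_increment: float,
                 threshold: float, algorithm: str) -> bool:
        if algorithm == "OFF":
            return True                                  # DL power CAC disabled: always admit
        if algorithm == "ALGORITHM_FIRST":
            # Load-factor prediction: admit only if the post-admission load stays below the threshold.
            return (current_dl_load + predicted_increment) <= threshold
        raise ValueError("unknown algorithm")

    # A heavily loaded cell rejects the call with ALGORITHM_FIRST but accepts it with OFF:
    print(dl_admit(0.84, 0.03, threshold=0.85, algorithm="ALGORITHM_FIRST"))  # False
    print(dl_admit(0.84, 0.03, threshold=0.85, algorithm="OFF"))              # True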

1.3 KPI Analysed: KPIs were analysed after the change and improvement was found.
KPIs attached for reference:

Cluster Name: BallyguileHill (post-change)

Cell                      Start time          CSSR (CS)   Failed CS RAB Est. (DL Power Congestion)
BallyguileHill_MMC_F1_1   11/30/2012 18:00    99.42%      0
BallyguileHill_MMC_F1_2   11/30/2012 18:00    100.00%     0
BallyguileHill_MMC_F1_1   11/30/2012 19:00    99.65%      0
BallyguileHill_MMC_F1_2   11/30/2012 19:00    100.00%     0
BallyguileHill_MMC_F1_1   11/30/2012 20:00    99.75%      0
BallyguileHill_MMC_F1_2   11/30/2012 20:00    100.00%     0
BallyguileHill_MMC_F1_1   11/30/2012 21:00    99.04%      0
BallyguileHill_MMC_F1_2   11/30/2012 21:00    100.00%     0
BallyguileHill_MMC_F1_1   11/30/2012 22:00    99.60%      0
BallyguileHill_MMC_F1_2   11/30/2012 22:00    100.00%     0
BallyguileHill_MMC_F1_1   11/30/2012 23:00    99.58%      0
BallyguileHill_MMC_F1_2   11/30/2012 23:00    100.00%     0

1.4 The DL power congestion problem was resolved after changing this parameter.

Phenomenon Description
HSUPA call drops increased after compressed mode (CM) on HSUPA was permitted: the CM permission indicator on HSUPA was changed from LIMITED to PERMIT.
List RNC-oriented CMCF algorithm parameters
-------------------------------------------
CM permission ind on HSDPA = PERMIT
CM permission ind on HSUPA = PERMIT
CM permission ind on HSPA+ = PERMIT

Alarm Information
none

Cause Analysis

Check the behaviour of all counters in the HSUPA call drop formula, and check the expected behaviour of the system when CM on HSUPA is permitted.

Hourly HSUPA counters, 2011-04-03 00:00 – 2011-04-03 15:00 (one value per hour, in time order):

VS.HSUPA.RAB Release: 51682 47012 42068 39811 37147 35628 35007 33478 33488 36558 43005 46745 50449 53865 53655 53326
VS.HSUPA.RAB.AbnormRel Rate: 1.02% 1.06% 0.54% 0.62% 0.48% 0.46% 0.47% 0.48% 0.47% 0.59% 0.78% 1.01% 0.92% 1.20% 1.21% 1.20%
VS.HSUPA.RAB.AbnormRel: 526 498 228 246 179 165 165 161 158 216 337 472 466 646 649 640
VS.HSUPA.RAB.NormRel: 47033 43439 39714 37729 35489 34323 33860 32371 32243 34963 40493 43090 46371 48550 48341 48418
VS.HSUPA.E2D.Succ: 4123 3075 2126 1836 1479 1140 982 946 1087 1379 2175 3183 3612 4669 4665 4266
VS.HSUPA.HHO.E2D.SuccOutIntraFreq: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
VS.HSUPA.HHO.E2D.SuccOutInterFreq: 0 for every hour in this period
VS.HSUPA.E2F.Succ: 0 for every hour in this period

Hourly HSUPA counters, 2011-04-03 16:00 – 2011-04-04 23:00 (one value per hour, in time order):

VS.HSUPA.RAB Release: 53662 56492 56744 59140 61355 60632 61460 57151 49978 45064 41767 38890 38198 37880 39438 49245 76818 97637 101384 102138 106681 107342 103931 100534 102318 103256 98919 89741 75692 70472 66384 60195
VS.HSUPA.RAB.AbnormRel Rate: 1.25% 1.21% 1.32% 1.16% 1.30% 1.27% 1.19% 1.01% 0.83% 0.50% 0.57% 0.41% 0.33% 0.30% 0.31% 0.52% 0.90% 0.81% 1.16% 1.03% 1.09% 1.08% 1.15% 1.27% 1.22% 1.22% 1.46% 1.48% 1.50% 1.49% 1.50% 1.61%
VS.HSUPA.RAB.AbnormRel: 669 685 749 688 800 771 730 575 413 227 240 161 125 115 124 258 694 790 1172 1054 1165 1156 1194 1275 1249 1257 1443 1325 1138 1049 997 971
VS.HSUPA.RAB.NormRel: 48145 50469 50089 52716 54158 53661 54917 51068 46046 42762 40027 37628 37153 36891 38379 47385 72340 90664 97995 100975 105377 106054 102660 99181 100988 101918 97404 88373 74528 69387 65327 59160
VS.HSUPA.E2D.Succ: 4848 5338 5905 5736 6397 6198 5813 5508 3519 2075 1500 1101 920 874 935 1602 3784 6183 2217 107 138 130 75 77 79 79 71 43 25 36 59 62
VS.HSUPA.HHO.E2D.SuccOutIntraFreq: 0 0 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 1 1 0 1 0 1 2
VS.HSUPA.HHO.E2D.SuccOutInterFreq: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 2 1 1 2 1 0 0 0 0 0 0
VS.HSUPA.E2F.Succ: 0 for every hour in this period

Suggestions and Summary


It is important to analyse system behaviour after a feature is activated in the network, so that the root cause of abnormal KPI behaviour can be explained.
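A minimal sketch of such a before/after check, assuming hourly exports of VS.HSUPA.RAB.AbnormRel and VS.HSUPA.RAB.NormRel and a drop-rate definition of AbnormRel / (AbnormRel + NormRel) (the exact KPI formula in the counter set may differ); the counter values used here are placeholders, not measured data:

    # Simple before/after comparison of the HSUPA drop rate around a feature activation.
    def drop_rate(abnorm: int, norm: int) -> float:
        return 100.0 * abnorm / (abnorm + norm) if (abnorm + norm) else 0.0

    def compare_feature_impact(pre_hours: list, post_hours: list) -> None:
        pre = [drop_rate(a, n) for a, n in pre_hours]
        post = [drop_rate(a, n) for a, n in post_hours]
        print(f"avg HSUPA drop rate before activation: {sum(pre) / len(pre):.2f}%")
        print(f"avg HSUPA drop rate after activation:  {sum(post) / len(post):.2f}%")

    # (abnormal, normal) release pairs per hour -- placeholder values for illustration
    compare_feature_impact(
        pre_hours=[(50, 9950), (48, 9952), (55, 9945)],
        post_hours=[(120, 9880), (135, 9865), (128, 9872)],
    )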

Case name: Abnormal distribution of VS.RRC.AttConnEstab.Reg

Phenomenon Description: In country R, during a WCDMA optimization project, at the RRC CSSR optimization step the RNO team found an abnormal distribution of RRC attempts with the registration establishment cause: it accounted for around 50% of total RRC attempts. The RNC was a BSC6810 running V200R011C00SPC100.

Symptoms:

1. High number of RRC attempts. 2. Abnormal distribution of RRC attempts with the registration cause. 3. No hardware alarms.

Analyze sequence:

1. Localize the problem. 2. Analyze possible reasons. 3. Perform Drive Test. 4. Check RNC level parameters.

Analysis Procedure: From the statistics for RNC 4016, VS.RRC.AttConnEstab.Reg takes around 50% of the total RRC connection establishment attempts, and the attempts are evenly distributed among cells.
RNCName     Time (as day)   VS.RRC.AttConnEstab   VS.RRC.AttConnEstab.Reg
RNC:4016    2011-08-10      791541                414010
RNC:4016    2011-08-11      811675                462559
RNC:4016    2011-08-12      796428                424042
RNC:4016    2011-08-13      815134                446783
RNC:4016    2011-08-14      835164                450958
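The registration share can be computed directly from these counters, as in the sketch below (the 30% flag level is an assumed rule of thumb, not a product threshold):

    # Registration share of total RRC attempts for RNC:4016, using the values from the table above.
    daily = {
        "2011-08-10": (791541, 414010),
        "2011-08-11": (811675, 462559),
        "2011-08-12": (796428, 424042),
        "2011-08-13": (815134, 446783),
        "2011-08-14": (835164, 450958),
    }

    for day, (att_total, att_reg) in daily.items():
        share = 100.0 * att_reg / att_total
        flag = "ABNORMAL" if share > 30.0 else "ok"   # 30% flag level is an assumed rule of thumb
        print(f"{day}: VS.RRC.AttConnEstab.Reg share = {share:.1f}% ({flag})")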

[Chart: daily VS.RRC.AttConnEstab vs VS.RRC.AttConnEstab.Reg for RNC:4016, 2011-08-10 to 2011-08-14]

At the same time, the other two RNCs did not show this situation: their RRC attempts with the registration cause were no more than 15%.

These results exclude a core network problem, because all three RNCs in this region share the same CN. The possible reasons are therefore: 1. wrong RNC/cell-level parameter settings; 2. poor coverage and frequent 2G <-> 3G reselection. For the first reason, we used the Nastar configuration analysis function to check for differences in parameter settings: no differences were found. For the second reason, the RNO team decided to perform a drive test to check coverage and UE behaviour. As a result, we found that the UE repeatedly performed a combined RA/LA update, and the Location Update part failed every time with the cause "MSC temporarily not reachable", while the RA update itself completed successfully.

This is the root cause of the high registration volume. About combined RA/LA updates: if the optional Gs interface is implemented and the UE has entered a new LA as well as a new RA, a combined RA/LA update is performed. From the MS point of view, all signalling exchange takes place towards the SGSN; the SGSN then updates the MSC/VLR. A combined RA/LA update takes place in network operation mode I when the UE enters a new RA or when a GPRS-attached UE performs IMSI attach. The UE sends a Routing Area Update Request indicating that an LA update may also need to be performed, in which case the SGSN forwards the LA update to the VLR. This concerns only CS idle mode, since no combined RA/LA updates are performed during a CS connection. In our network the Gs interface is not configured, so we checked the network operation mode for the PS CN domain. It was set to NMO = Mode 1:
ADD CNDOMAIN: CNDOMAINID=PS_DOMAIN, DRXCYCLELENCOEF=6, NMO=MODE1;
For the other RNCs it was set to NMO = Mode 2. Nastar did not find this configuration difference because it relates to the CN domain configuration. After changing to NMO = Mode 2, the problem was solved and the share of RRC attempts with the registration cause decreased to around 5%.
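The sketch below paraphrases the behaviour described above as a simple decision function (a simplification, not a full 3GPP mobility state machine):

    # Why NMO I without a Gs interface inflates registration attempts
    # (behaviour paraphrased from the description above).
    def updates_when_entering_new_lai_rai(nmo: str, gs_interface_present: bool) -> str:
        if nmo == "MODE1":
            # NMO I tells the UE that the network combines CS/PS updates via the SGSN.
            if gs_interface_present:
                return "Combined RA/LA update over the PS domain (single procedure)"
            # Without Gs, the LA part cannot reach the MSC/VLR, so it keeps failing
            # (e.g. 'MSC temporarily not reachable') and the UE retries the registration.
            return "Combined RA/LA update fails repeatedly -> repeated RRC registration attempts"
        # NMO II: UE performs the RA update and the LA update as separate procedures.
        return "Separate RA update (SGSN) and LA update (MSC/VLR)"

    print(updates_when_entering_new_lai_rai("MODE1", gs_interface_present=False))
    print(updates_when_entering_new_lai_rai("MODE2", gs_interface_present=False))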

[Chart: daily VS.RRC.AttConnEstab vs VS.RRC.AttConnEstab.Reg, August 2011, showing the reduced registration attempts after the NMO change]

Suggestion: RAN performance optimization needs to consider the whole network structure, including the transmission and core network. A wrong setting of a global parameter such as NMO brings additional UE power consumption, additional radio resource consumption, and extra RNC SPU and CN signalling load.
