
Nexus Mutual – Token Model Stochastic Simulation

May 2019

1 Intro
This document outlines the modelling performed to investigate the adequacy of the parameters of
Nexus Mutual’s economic system. The main focus was to stress test the system to understand its
limitations, rather than to provide accurate estimates of future system values.

All results in this report should be read in this context and are not representations by the existing
Nexus Mutual team of future sales, token price or any other metrics.

This is achieved via MATLAB simulations. Each simulation is stochastic, sampling random variables
from distributions in order to create a real-world scenario. As human behaviour is not entirely
predictable, the system is modular, with easily modifiable distributions to enable a number of
parametrisations. The parametrisations are then varied to analyse the outcomes and limits of the
system.

We aim to measure the health of the system, expressed largely in ongoing healthy
capitalisation and token price, with the two closely linked by formula.

We take two broad approaches to testing:

1. Run a small number of simulations whilst changing a single input parameter within a range,
thereby assessing the sensitivity of the system to changes in that parameter.
2. Run a large number of simulations with fixed input parameters in order to determine the
potential spread of system outcomes.

A simplified overview of the components and initially selected parameters is provided below.

Note that this analysis represents a small subset of what could be investigated and there is no claim
that this report provides definitive answers or proof of the real-life performance of the mutual. We
would welcome community members to comment, request further analysis and build upon the work
done here.
2 Model Set-up and Components
The characteristics and operation of the risk assessment, cover purchase, claims submission and
assessment, token purchases and redemptions are as per the Nexus Mutual gitbook.

Simulation

The simulation loops over days, with a set of events occurring each day according to some probability
distributions, specified below:

1. Each day, there is an amount of NXM staked on the platform. The number of daily stakers is
modelled by

S ~ Poisson(λstakes)

and the amount of tokens staked by each staker by

X ~ Normal(μstaked, σstaked²)

2. The model further allows for a sample from

C ~ Poisson(λcovers)

to model the number of covers bought each day.

3. Following the purchase of each cover, the probability of claim submission (based on the Risk Cost)
is uniformly distributed over the Cover Period + 35 days.

4. Each day, there is also a sampling of

B ~ Poisson(λbuys)

to determine the number of token purchases from the platform, and of

R ~ Poisson(λsells)

to determine the number of token redemptions to the platform. Each purchase or sale is of size Y
(in ETH), which follows a normal distribution:

Y ~ Normal(μtokens, σtokens²)

The simulation loops over a set number of days, staking tokens, buying covers, checking for claims,
and buying and selling tokens, updating the system after each event.
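The report’s simulations were written in MATLAB; for illustration only, the daily loop described above can be sketched in Python (stdlib only). Distribution shapes and base values are taken from Sections 2 and 3; all function and variable names are our own, and the valuation-dependent adjustment of λbuys and λsells described later is omitted for brevity.

```python
import math
import random

random.seed(42)

# Base-case parameters from Section 3 (illustrative only).
LAMBDA_STAKES = 2 / 3                     # mean daily Risk Assessment stakers
LAMBDA_COVERS = 1.0                       # mean daily cover purchases
MU_STAKED, SIGMA_STAKED = 1500.0, 500.0   # NXM staked per staker
MU_TOKENS, SIGMA_TOKENS = 6.0, 1.2        # ETH per token purchase/sale
DAYS = 730

def growth_factor(day):
    """Quadratic adoption growth: 1 + 0.0001 * day^2."""
    return 1 + 0.0001 * day ** 2

def sample_poisson(lam):
    """Knuth's algorithm for a Poisson draw (stdlib only)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_day(day):
    """Sample one day's events: stakes, covers bought, token buys and sells."""
    g = growth_factor(day)
    stakes = [max(0.0, random.gauss(MU_STAKED, SIGMA_STAKED))
              for _ in range(sample_poisson(LAMBDA_STAKES * g))]
    n_covers = sample_poisson(LAMBDA_COVERS * g)
    # lambda_buys / lambda_sells before the valuation/capital adjustment:
    buys = [max(0.0, random.gauss(MU_TOKENS, SIGMA_TOKENS))
            for _ in range(sample_poisson(1.0 * g))]
    sells = [max(0.0, random.gauss(MU_TOKENS, SIGMA_TOKENS))
             for _ in range(sample_poisson(1.0 * g))]
    return stakes, n_covers, buys, sells

history = [simulate_day(d) for d in range(DAYS)]
```

Each element of `history` is one simulated day; a full model would additionally update the capital pool, MCR and token price after each event.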
3 Base Scenario
This initial investigation models only one type of contract and cover on that contract. Even though this
does not represent real-world scenarios, in which covers and contracts would be varied, it is not
seen as an issue given the link between cover characteristics and claims outcomes, as well as the ability
to vary cover purchase frequency.

| Contract/Cover Characteristic | Value | Justification |
| --- | --- | --- |
| Age (days) | 98 | An example contract used in the initial pricing model; a reasonable time from launch for users to want cover |
| Gas paid at deployment incl. related contracts (gwei) | 5.9m | CryptoKitties Core contract, one of the contracts used in parameterising the pricing volumes in March 2018 |
| Transactions made | 413,889 (~414k) | CryptoKitties Core contract, as above |
| ETH held | 303 | CryptoKitties Core contract, as above |
| Cover Amount (ETH) | 600 | A proxy amount at which a smart contract user would start to worry about losing their input value |
| Cover Period (days) | 90 | An amount of time after which it is reasonable to assume no hacks would happen, given no hacks to this point |

In order to set the parameterisations of these distributions, we establish a baseline case which we can
then stress within our investigations.

| Parameter | Base Value | Justification |
| --- | --- | --- |
| (Proxy) MCR as proportion of Cover Amount | 0.04 | Reasonable contribution to capital strain whilst remaining above the capital requirement implied by the SII Standard Formula |
| Minimum MCR | $5m | An amount of capital which would enable meaningful writing of cover volumes |
| Growth Factor | 1 + 0.0001 × Days² | Quadratic growth whilst retaining modelling capability over a reasonable timeframe |
| λcovers = mean/variance of cover purchases per day | 1 × Growth Factor | Start with this amount per day, increasing over time |
| μstaked = Risk Assessment staking mean (NXM) | 1500 | ~$1,500–$2,000, an expected early staking amount |
| σstaked = Risk Assessment staking standard deviation (NXM) | 500 | Significant variation in Risk Assessment stake on a case-by-case basis |
| λstakes = mean/variance of Risk Assessment stakers per day | 2/3 × Growth Factor | Start with this amount per day, increasing over time |
| μtokens = mean token purchase (ETH) | 6 | ~$840, an expected early token purchase amount |
| σtokens = standard deviation of token purchase (ETH) | 1.2 | Significant variation in token purchases and sales on a case-by-case basis |
| λtokens = number of token buyers and sellers per day | 1 × Growth Factor | Assume there are as many buyers and sellers as there are cover purchasers, based on the initial assumption that speculation is a major use case |
| Valuation | NXM Price × Number of NXM | The implied valuation of the platform |
| λbuys | λtokens / ((Valuation(t) / Capital Pool(t)) × (Capital Pool(0) / Valuation(0))) | Mean number of token purchases per day, increasing as the capital pool increases relative to the valuation |
| λsells | λtokens × ((Valuation(t) / Capital Pool(t)) × (Capital Pool(0) / Valuation(0))) | Mean number of token sales per day, decreasing as the capital pool increases relative to the valuation |
| Days | 730 | Corresponds to a 2-year time horizon |
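The λbuys and λsells formulas above can be sketched as follows (Python used for illustration; the report’s own model is in MATLAB, and the function names here are our own):

```python
def valuation_ratio(valuation_t, capital_t, valuation_0, capital_0):
    """Current valuation/capital ratio, normalised by its starting value."""
    return (valuation_t / capital_t) * (capital_0 / valuation_0)

def lambda_buys(lambda_tokens, valuation_t, capital_t, valuation_0, capital_0):
    # Buying pressure falls as the valuation rises relative to the capital pool.
    return lambda_tokens / valuation_ratio(valuation_t, capital_t, valuation_0, capital_0)

def lambda_sells(lambda_tokens, valuation_t, capital_t, valuation_0, capital_0):
    # Selling pressure rises as the valuation rises relative to the capital pool.
    return lambda_tokens * valuation_ratio(valuation_t, capital_t, valuation_0, capital_0)
```

At launch the normalised ratio equals 1, so buys and sells balance at λtokens; as the valuation outpaces the capital pool, selling pressure rises and buying pressure falls, and vice versa.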

Some conclusions have already led to changes in the parameters of the platform. Specifically, the
previous formula

NXM Price = (Number of Tokens / C) × MCR%²

has been replaced by the current formula

NXM Price = a + (MCR / C) × MCR%⁴

as a result of its incompatibility with the assumption that the buying and selling of tokens depends
largely on the relationship between the valuation of the mutual and the capital held by the mutual.
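As an illustration of the current formula (the constant names are ours; a and C are fixed at launch, and the values used in the check below are arbitrary, not the live parameters):

```python
def nxm_price(a, mcr_eth, c, mcr_pct):
    """Current price formula: NXM Price = a + (MCR / C) * MCR%^4.

    a and C are constants fixed at launch; mcr_eth is the MCR in ETH and
    mcr_pct is the MCR% ratio, with 1.0 meaning exactly fully capitalised.
    """
    return a + (mcr_eth / c) * mcr_pct ** 4
```

When MCR% hovers near 100%, the MCR%⁴ term stays close to 1 and the price is driven mainly by growth in the MCR itself, which is the behaviour discussed in Section 4.2.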
4 Investigations

4.1 Single simulation with single variable changes


The purpose of these simulations is to investigate a selection of variables. This applies to both
investigating the impact of changing certain variables and, if the impact is large, whether the selected
variables are appropriate.

One simulation at a time is performed while changing a single parameter. The results are then
compared and a conclusion is drawn as to whether or not the variable is appropriate.

4.1.1 MCR as percentage of Cover Amount

This is a parameter which allows us to check the impact of holding a certain level of solvency capital
(in the form of the non-Best Estimate Liability portion of the MCR) expressed as a percentage of the
total Cover Amount in force. It is a proxy and does not reflect the full capital calculation. However, it
is useful in this context as a measure of the capital strain encountered in writing covers.

Prior to this modelling exercise, this parameter was set at 0.2, reflecting approximately 20% of the
total Cover Amount being held as solvency capital. This was consistent with the original capital model,
which was based on holding the full cover amounts, allowing for SII-style correlation calculations and
finally scaling the results down by a factor of 0.3 to prevent overcapitalisation.

After some testing, it became clear that this level of capital was too high: the incoming funds from
writing covers and token purchases would likely not sustain it, as evidenced by the MCR% (“mcrp”)
dropping considerably below the 100% threshold soon after the fixed minimum mcr amount was
breached.

An investigation was conducted into what the capital requirement would look like if the Solvency II
framework were followed to the letter. This revealed that at the level where the original capital
model would require $5m of capital, the SII framework prescribed roughly $750k of capital.

Therefore, the reducing factor in the capital model was lowered from 0.3 to 0.06 and the
corresponding mcr / cover amount proxy ratio for the MATLAB simulations was lowered from 0.2 to
0.04.
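The rescaling is a simple proportional adjustment; a sketch of the arithmetic (the factor-of-5 reduction is implied by the before/after values quoted above, and the constant names are ours):

```python
# Original capital model settings (Section 4.1.1).
ORIGINAL_SCALING_FACTOR = 0.3   # reducing factor applied to the capital model result
ORIGINAL_PROXY_RATIO = 0.2      # proxy MCR / cover amount ratio in the MATLAB model

# Both were reduced by the same multiple of 5 after comparison with the
# Solvency II standard-formula result (~$750k of required capital at the point
# where the original model required $5m).
REDUCTION = 5

new_scaling_factor = ORIGINAL_SCALING_FACTOR / REDUCTION  # 0.06
new_proxy_ratio = ORIGINAL_PROXY_RATIO / REDUCTION        # 0.04
```

Note that the revised proxy still sits above the strict SII floor (750k / 5m = 0.15 of the original requirement), retaining a margin of prudence.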

It must be noted that insurance-type business is traditionally very capital intensive in the early stages
and Nexus Mutual would be no exception. Therefore, we would rely on the community to provide
capital injections as needed assuming there is an obvious need in order to write more covers (an
excellent problem to have!). Eventually, sufficient diversification and increasing incoming premiums
will result in self-sustainability.

4.1.2 Profit Margin

The profit margin is the amount charged for covers above the Risk Cost (i.e. the expected payout
for each cover). It is currently set at 30%, a fairly typical margin for insurance companies to charge
above the expected claims cost (see Whitepaper). This test both removes the profit margin
entirely and sets it at much higher levels.
The graphs show that increasing the profit margin to 100% or 200% yields a higher capital pool and
hence a higher mcrp ratio. However, in practice this level would not be acceptable to users, as the
contributions paid to protect themselves against risks would be too high (in many cases the cost of
cover would exceed the value generated by e.g. a Compound deposit).

On the other hand, the graphs show that even if we set a 0% profit margin, the performance of the
mutual is not significantly impacted. Note that this assumes that capital contributors (i.e. buyers and
sellers of tokens) continue to act as if they believe in the performance of the mutual and hence
recapitalise it as required. While this is not in practice sustainable in the long term, it would give the
mutual a window to change its pricing basis and parameters.

The test of a lower profit margin can also be interpreted as a check on what happens if our pricing
basis is too light and we end up with higher claims than expected.

4.1.3 Token Buying/Selling Frequency

This test varies the relative frequency of token purchases and redemptions compared to the frequency
of cover buying. It is a means of measuring the extent to which the users are cover buyers as opposed
to capital providers.

Note again that the simulation as a whole assumes that the parameters governing token purchases
and sales (𝜆buys and 𝜆sells) depend on the ratio between the valuation (number of tokens x token price)
and capital resources of the mutual. If the valuation/capital ratio is above the starting value, we expect
more sales than purchases and vice versa.
The mcrp graph here shows that if there are more buyers and sellers of tokens relative to cover
purchasers, the mutual’s capitalisation position is improved. This is consistent with the capital
intensiveness of insurance-type products and the resulting requirement for early-stage injections of
capital into the mutual.

For the “base” case, we have used the 1-1 ratio between cover buyers and capital providers. This is a
reasonably arbitrary selection, but is based on the proposition that mutual participants are more likely
to be capital providers in the early stages relative to the long term.

While it is possible that there would be a short-term capital shortage as a result of over-purchase of
covers, this is seen as a ‘good problem to have’. It would indicate that there is significant demand for
the covers offered and should encourage new providers of capital to enter the mutual. This aspect of
member behaviour has not been captured by the simulation model.

4.1.4 Growth Rate

This investigation reflects varying levels of user adoption of the platform, ranging from
1 + 0.00001 × day² covers bought per day on average to 1 + 0.001 × day² covers, with
corresponding changes in the frequency of staking and token buying and selling. The results of one such
test are shown below:
It can be seen from the mcrp graphs that a crucial turning point is the moment when the mcr starts
increasing beyond the minimum amount and, as expected, this point comes earlier when the ‘growth
factor’ is higher. For example, with a growth factor of 1 + 0.001 × day², in this simulation
the minimum mcr level of $5m is breached at day 167, at which point 29 covers are being bought
per day on average. It is at this point that the mcrp decreases, as the mutual is writing
new covers but no capital is yet being released from previous covers being removed from the books.
Over time, as previous covers do start coming off the books, combined with the effect of recapitalisation
(see sections 4.1.1 and 4.1.3), the mcrp recovers.
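The day-167 figure quoted above is consistent with the growth factor; a quick check (function and variable names are ours):

```python
def mean_covers_per_day(day, k=0.001):
    """Mean daily cover purchases under a growth factor of 1 + k * day^2."""
    return 1 + k * day ** 2

# At day 167 under the fastest growth pattern tested:
rate_at_breach = mean_covers_per_day(167)  # 1 + 0.001 * 167^2 = 28.889, i.e. ~29 covers/day
```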

If the growth pattern is very slow (e.g. 1 + 0.00001 × day² covers per day), the minimum mcr
threshold is not breached for a long time, and the mcrp therefore depends largely on the claims
experience rather than on a rapidly increasing capital requirement.

A demonstration of NXM price growth in the same scenarios is displayed below.


It can be seen that the higher the adoption of the platform, the quicker the NXM price grows. This is
largely a result of the MCR component dominating the formula in the longer term, as the MCR%
component converges towards and hovers near 100%.

Note that there were also investigations conducted with non-quadratic growth patterns (e.g. a flat
ongoing rate of cover buying / staking / token purchases and redemptions), but these were deemed
less likely than the scenarios described above.
4.2 Multiple simulations with set variables
The final step of the initial investigation was to conduct 1000 simulations (with the probability
distributions and “base” parameters as listed in the sections above) representing the first two years
of the mutual following enabling of cover sales and observing the spread of outcomes.

The results were as follows:

Of 1000 simulations, 40% of outcomes after 730 simulated days had an MCR% below 100%, with 279
simulations ending in the range [0.9972, 1.001], very close to the starting value of 1.

This is largely a function of the λbuys and λsells parameters as investigated in section 4.1.3, and other
parameters could easily have been chosen. It seems a reasonable outcome given the future
uncertainty of platform participation: early users may want to withdraw value when they are able to
do so, and this is reflected in the outcomes here. If the parameters were set differently, with more
buying force than selling force, reflecting different member behaviour, the results would centre around
a different point.

Based on the parameters chosen, the NXM price was projected to be between 0.045 and 0.052 ETH in
all simulations, a rise of between 57% and 81% from the selected starting point of 0.0287 ETH.
[Note: these values do not correspond to the actual price metrics used in the live system, which will
be set just prior to launch.]
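The quoted percentage rises follow directly from the simulated range; as a quick arithmetic check (values from this section, variable names ours):

```python
START_PRICE = 0.0287                      # simulated starting NXM price (ETH)
LOW_OUTCOME, HIGH_OUTCOME = 0.045, 0.052  # projected price range after 730 days

rise_low = LOW_OUTCOME / START_PRICE - 1    # ~0.57, i.e. a 57% rise
rise_high = HIGH_OUTCOME / START_PRICE - 1  # ~0.81, i.e. an 81% rise
```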

As the MCR% centres around 100% with these parameters, the driving force behind the increase in NXM
value (via the formula NXM Price = a + (MCR / C) × MCR%⁴) is the extent of user adoption of covers,
expressed in the growing level of the MCR.

Therefore, it is expected that if we were to change the adoption parameters in this simulation
(specifically the Growth Factor as outlined in Section 3 above), the NXM price outcomes would react
similarly – this is as intended by design.
5 Limitations and Future Components
This section lists some of the more obvious limitations of the investigation and proposes some
improvements for the future.

Limitations

• There is only one set of contract and cover characteristics being investigated. These
parameters could be varied to obtain a broader range of results. However, it was deemed
unnecessary for the initial purposes given the established link between the risk cost and the
claims outcomes.
• There is a proxy used for the capital requirement (i.e. the MCR), expressed as a percentage of
the total cover amount, rather than the full detailed capital model used in practice. This proxy
was originally established using the capital model spreadsheet used as the basis for the
platform implementation, then stressed as per section 4.1.1.

Possible Improvements

• Once the platform is operational, covers are being bought and claims begin to come through, the
claims experience should be tracked and analysed, and a basis set for each type of cover sold by
the mutual. Subsequently, this basis should be implemented in the simulations.
• A similar process should be followed for analysing the relative patterns of cover purchases,
risk assessment staking and the buying/selling of tokens on the platform. The relative
relationships, once experienced in practice and documented, can be more accurately
reflected in the simulation.
• A typical use of stochastic models of this type in insurance is to model a range of scenarios
and establish the number of cases in which the mutual remains solvent. This is closely linked
to having an accurate depiction of the capital model in the simulation.
