Risk Lighthouse is Published in the Asian Insurance Review

November 2015


Is the insurance industry keeping up with the rapidly evolving cyber risk landscape?

Messrs Shaun Wang, Mark Terris, and Bradley Schaufenbuel explain how to objectively evaluate cybersecurity risk and price cyber insurance policies more accurately in the rapidly expanding digital economy.

As the digital economy rapidly expands, a shrinking percentage of risk professionals and CFOs feel they have a complete understanding of
cyber risk. Insurers do not yet entirely understand cyber risk either; underwriting processes to quantify controls are not fully developed. As a result, insurers are not pricing cyber risk policies effectively and charge many customers similar rates without thoroughly evaluating the maturity of the underlying IT programs, which can lead to complacent cybersecurity processes once insurance is in place. The need for better pricing and underwriting processes for cyber insurance is heightened by the continued expansion of the digital economy.

Objective evaluation and pricing

How can insurers objectively evaluate cybersecurity risk and thus price cyber insurance policies more accurately? As insurance modelling, risk, and IT security experts, we conducted collaborative research resulting in the following insights and observations.

There are clear relationships between an organisation’s risk profile, the maturity of its cybersecurity program, and the probability that the organisation will incur losses under a cyber insurance policy. For the most part, attackers are rational actors: the lower the payoff and the more difficult a target is to compromise, the more likely attackers are to move on to an easier and more lucrative target. Many factors determine an insured’s susceptibility to cyber losses, including the industry in which it operates; the type, volume, and location of the sensitive information it holds; the maturity level of its information security program; and the inclusion of cyber risk in an ERM program.

We cannot predict with certainty which organisations will suffer a data breach. We can, however, determine which organisations are more likely than others to suffer one, with greater accuracy than the rudimentary methods often used by cyber insurance underwriters today, by utilising a more sophisticated and predictive data breach risk model. By marrying the growing body of historical cyber loss data with a model for measuring inherent risk and cybersecurity program maturity, we can predict likely losses for individual cyber insurance applicants.

Diagram 1

Diagram 1 illustrates the model at a high level. The inputs into this model include attributes from three primary sources. Inherent risk is determined using factors that are largely outside of the applicant’s control. This includes historical data breach loss data as well as an organisational threat and vulnerability profile. The types of attributes that encompass the organisational profile include the size of the applicant (measured in number of employees, revenues, and market value), the industry the applicant operates in, the number of unique confidential records the applicant has in its possession, etc. Historical loss data is derived from databases of publicly disclosed data breaches and studies conducted by security researchers.
For example, if the applicant maintains 50,000 unique records containing personally identifiable information (PII) and we combine that with the Ponemon Institute’s estimate of the cost of responding to a data breach at US$202 per record, the maximum uncontrolled exposure to the insurer in a data breach response scenario is approximately $10 million.
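The exposure arithmetic above can be sketched as follows; the record count and per-record cost are the illustrative figures from the example, not model parameters:

```python
# Rough upper bound on an insurer's uncontrolled exposure in a data
# breach response scenario: records held multiplied by cost per record.

def max_uncontrolled_exposure(num_records: int, cost_per_record: float) -> float:
    """Maximum breach-response cost before any controls are credited."""
    return num_records * cost_per_record

# Figures from the example: 50,000 PII records at the Ponemon
# Institute's estimated US$202 response cost per record.
exposure = max_uncontrolled_exposure(50_000, 202.0)
print(f"US${exposure:,.0f}")  # US$10,100,000 -- approximately $10 million
```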

Program assessment

Controlled or managed risk is calculated by assessing the design and effectiveness of the applicant’s cybersecurity program and its level of inclusion in the company’s ERM program. Examples include measures of security governance, resource commitment, technological safeguards, administrative controls, security awareness, and physical security controls. Instead of the brief application form on which many cyber insurers currently rely to assess managed risk, the insurer could leverage better indicators of cybersecurity program maturity. This information can be obtained from independent third-party audit or certification engagements and/or from a questionnaire.
Examples of independent assessments include ISO 27001 certification, an SSAE 16 SOC 2 Type 2 audit, HITRUST Common Security Framework (CSF) certification, or a PCI Data Security Standard (DSS) Report on Compliance (ROC).


The review of independent assessments would be similar to that performed by sophisticated enterprises today as part of their third-party service provider oversight or vendor management programs. If an applicant does not possess adequate independent assurances, a more detailed cybersecurity questionnaire may be used, containing questions that are statistically proven indicators of the efficacy of an organisation’s cybersecurity program.
A combination of an independent assessment and a questionnaire is also an option. Instead of underwriting cyber insurance policies based on general loss history and the contents of the application, lower premiums can be quoted to applicants with above-average cybersecurity programs and higher premiums to applicants with below-average cybersecurity programs. The ability to offer below-market premiums to organisations that have invested in building effective cybersecurity programs gives the insurer utilising this model a competitive advantage, both in its ability to win business and in achieving lower-than-industry-average losses over the life of these policies.
In the absence of existing third-party audits or certifications, an insurer may wish to validate the applicant’s responses to the questionnaire. The insurer can either add the cost of validation to the premium or give the applicant the option of paying for third-party validation. An applicant with an effective cybersecurity program may be happy to pay for validation if it results in a premium reduction that exceeds the cost of the validation.
Furthermore, if insurers (or at least a few major insurers) can agree on the model that is used to assess cybersecurity risk, an entire industry may blossom for independent assessors who complete questionnaires for applications and/or validate them for insurers (or applicants). The emergence of such an industry may be similar to that which arose for Qualified Security Assessors (QSAs) after the Payment Card Industry (PCI) released its Data Security Standard (DSS) for organisations that handle payment card data.

Cybersecurity risk scoring engine

The key element of the model is the cybersecurity risk scoring engine. Each input is transformed into a quantitative value, and each value is weighted according to its predictive efficacy. Leveraging advanced predictive analytics, an aggregate risk score is derived for the applicant. The risk score is first used to determine whether the applicant is insurable. Assuming a predefined threshold for insurability is met, the risk score can then be used to determine whether the applicant is quoted a higher-than-average premium (i.e., a risk premium is applied to the cost of insurance) or a lower-than-average premium (i.e., a risk discount is applied to the cost of insurance).
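A minimal sketch of such a scoring engine follows. The input categories reflect the article, but every weight, threshold, and input value here is a hypothetical placeholder; a real engine would fit these to historical loss data:

```python
# Hypothetical cybersecurity risk scoring engine: each quantified input
# (0 = low risk, 1 = high risk) is weighted by its assumed predictive
# efficacy, aggregated into one score, then mapped to an underwriting
# decision. All numbers below are illustrative assumptions.

INSURABILITY_THRESHOLD = 0.75   # scores above this are declined (assumption)
BASELINE_SCORE = 0.50           # average-risk applicant (assumption)

WEIGHTS = {
    "inherent_risk": 0.50,      # industry, size, records held
    "program_maturity": 0.35,   # audits, certifications, questionnaire
    "erm_integration": 0.15,    # inclusion of cyber risk in ERM
}

def risk_score(inputs: dict) -> float:
    """Aggregate normalised inputs into a single weighted risk score."""
    return sum(WEIGHTS[k] * inputs[k] for k in WEIGHTS)

def underwriting_decision(score: float) -> str:
    """Map an aggregate score to decline / risk premium / risk discount."""
    if score > INSURABILITY_THRESHOLD:
        return "decline"
    return "risk discount" if score < BASELINE_SCORE else "risk premium"

applicant = {"inherent_risk": 0.6, "program_maturity": 0.3, "erm_integration": 0.2}
score = risk_score(applicant)        # 0.5*0.6 + 0.35*0.3 + 0.15*0.2 = 0.435
print(underwriting_decision(score))  # risk discount
```

The design choice is deliberately simple: a linear weighted sum is transparent to applicants and regulators, though an insurer with enough claims data could substitute a fitted predictive model for the hand-set weights.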
To monitor that an insured’s cybersecurity program remains robust throughout the life of the policy, insurers may tie future premium adjustments to the results of periodic reassessments or some form of continuous audit/assessment process. The underwriting pricing process of cyber risks will continue to evolve as more data becomes available and better underlying models are developed to evaluate loss frequency and severity.
Cyber incident data is currently the most developed for analysis. While a uniform, dedicated cyber incident repository does not yet exist, incident information is available through public disclosures and operational risk management loss event databases, mostly related to financial institutions. Analysis of this data can provide the foundation for a meaningful loss frequency and severity model.
Insurance claims data is another area rich with information; however, it is only just starting to be culled and organised for usefulness. Given the lack of a standard policy form for cyber risk, and because the consequences and impacts of cyber incidents may be included in other policy cover (e.g. business interruption, liability, fraud), the timeframe to fully analyse this data will be longer.

Ultimately, through the partnership of expert IT program assessment, meaningful data analysis and model development, the insurance industry can emerge profitably while also creating a more efficient marketplace for cyber insurance cover, and influencing better overall cyber security processes.

Dr Shaun Wang is the Founder and Chairman,

and Mr Mark Terris is the Managing Director of Risk Lighthouse LLC. Mr Bradley Schaufenbuel is Principal - Security Services at Schaufenbuel Advisory Services, Inc.

SIRC Special Commemorative Issue Nov 2015 Download at http://www.asiainsurancereview.com/Archives/Conference-Dailies/SIRC (pages 14 and 15)


November 2015

The Variance Prize for papers published in Variance volume 8 has been awarded to Jessica Leong, Shaun Wang and Han Chen, for their paper “Back-Testing the ODP Bootstrap of the Paid Chain-Ladder Model with Actual Historical Claims Data.”

The Variance Editorial Board awards the Variance Prize for the best paper published in each volume of the journal.

The winning paper presents a back-test of a popular technique to obtain reserve distributions. By using the data from several hundred U.S. companies, spanning three decades, the authors show that the modeled distributions emerging from this technique can underestimate reserve risk. The paper examines the causes of this problem, and suggests two methods to address it by accounting for systemic risk.

Jessica Leong, FCAS, FIAA, is a predictive analytics execution lead at Zurich Insurance. In this role, she works with the business, ensuring effective execution of predictive analytics projects. Prior to working in predictive analytics, Leong held roles in capital modeling and reserving. Most recently, she was the lead casualty specialty actuary at Guy Carpenter. She was also a consultant at Milliman in New York and Towers Watson in London. Jessica is a Fellow of the Institute of Actuaries of Australia and a current board member of the Casualty Actuarial Society.

Shaun Wang, FCAS, CERA, is Professor at Nanyang Technological University in Singapore. Before that, he was Chairman of Risk Lighthouse LLC. Previously, he was deputy secretary general and head of research at the Geneva Association, and pricing actuary and research director at SCOR Reinsurance Co. (1997-2004). He was professor of actuarial science at Georgia State University’s Robinson College of Business (2004-2013), assistant professor at the University of Waterloo (1994-1997), and assistant professor at Concordia University (1993-1994). Dr. Wang holds a BS degree in mathematics from Peking University and a PhD in statistics from the University of Waterloo.

Han Chen, FSA, ACAS, is lead analyst at Tokio Marine Technologies, where he is responsible for property and casualty reinsurance pricing/reserving tool development and emerging risk study. Prior to joining Tokio Marine Technologies, he led a research team in conducting P&C industry cycle-related analysis and other nontraditional actuarial research for Risk Lighthouse. Han has a bachelor’s degree in mathematics from Fudan University in China and master’s degrees in actuarial science and mathematical risk management from Georgia State University.


Published: Insuring Flood Risk in Asia’s High Growth Markets

August 2015

Dr. Shaun Wang has co-authored (with Dr. Kai-Uwe Schanz) a research report for the Geneva Association, the leading global insurance think tank. The report analyzes Asia’s growing flood risk exposures against the backdrop of rapid economic development and urbanization. Asia’s rising flood risk exposure is compounded by increasing population and asset value concentrations, as well as the impacts of climate change. The report also suggests solutions involving insurers, capital markets, and governments to address this rapidly growing threat of flood risk.

Link to the Geneva Association research report:



Published: Cyber Risk in the Financial and Health Sectors

July 2015


Risk Lighthouse has written a position paper on cyber loss and incident data for the financial and health sectors. We have a methodology for scrubbing, scoring, and aggregating cyber loss and incident data and reaping the benefits at the industry and enterprise levels. 

Please click the link below to access the paper. 



The Role of Microinsurance in Disaster Risk Management

July 2015

Dr. Wang co-authored (with Dr. Christophe Courbage of the Geneva Association) an article on “The Role of Microinsurance in Disaster Risk Management.”

The article provides an assessment of the benefits and challenges of microinsurance as a disaster risk management tool.

The article is to be discussed at the 11th International Microinsurance Conference 2015, which will take place 3–5 November 2015 in Casablanca, Morocco.

Approximately 400 participants and experts from around the world will discuss and identify ways of accelerating growth and economic viability in microinsurance. The conference will be hosted by the Munich Re Foundation, FMSAR and the Microinsurance Network, supported by CIMA, the CEAR, ILO’s Impact Insurance Facility and Making Finance Work for Africa.


