Q1) Section-A (Part 1)
Privacy and security of health information are of utmost importance in today's technologically advancing world. The rapid growth of electronic health record usage, along with the use of smartphones, tablets and patient portals, is changing how we look at our healthcare. The laws and regulations that protect our information also have to change.
These new technological advancements bring great challenges as well. Technologies like Remote Patient Monitoring point to the future of healthcare systems with the help of connected heart monitors, implanted sensors, and wearables, and they are designed to simplify and transform the way patients currently approach, access, and receive care.
There is also the concept called Data Liquidity: patients' data flowing over the cloud, from the cloud provider to the clinician and back to the patient. However, protecting this patient data is one of the main challenges, because the journey from data generation to the desired destination is a long one. Data recorded on wearable devices and implanted sensors is highly personal, and it first travels from the devices through home networks to applications over the internet. Breaches have the potential to occur at any point along the journey.
Nature of the Data: The nature of the data itself is another challenge, because the personal nature of the information, along with its long shelf life, makes health records highly attractive to hackers: they contain patients' medical history, policy numbers, billing information and so on. Moreover, this information is permanent, so hackers turn it to their advantage by selling it on the black market.
According to McKinsey, nearly 45 percent of U.S. hospitals participate in local or regional Health Information Exchanges. Standards to protect data security have been established as part of the Health Insurance Portability and Accountability Act (HIPAA), but they can change, and businesses are hard pressed to keep up. In fact, a recent study by Infinite Convergence found that 92 percent of healthcare institutions are not HIPAA compliant. Standards change rapidly based on needs and new threats, and companies unable to operate by those standards are left all the more vulnerable to attacks.
An emerging danger facing physicians and payers is malicious hacking through programs like ransomware, software that blocks an organization's access to its own computer system until a sum of money is paid. Ransomware has already demonstrated the ability to install itself on wearable devices as well as laptops, systematically eating away stored data. According to a study conducted by the Institute for Critical Infrastructure Technology, "Ransomware attacks on healthcare organizations will wreak havoc on America's critical infrastructure community." (Source: HIPAA-McKinsey)
Q1) Section-A (Part 2)
Internal threats in healthcare organizations create even greater risk, because they hit the organization directly and make it suffer. These threats are mainly of two types:
• Intentional Threats: These are carried out with a particular purpose, which can stem from dissatisfaction with a job, a manager, or some other organizational matter. Insiders may also use intellectual property to start their own businesses, or sell the data to third parties willing to buy it and use it in numerous ways.
Ex: In Children's Healthcare of Atlanta, Inc. v. McCray, Sharon McCray was a senior audit advisor for Children's Healthcare of Atlanta. On the day she resigned, McCray began sending patients' healthcare records from her corporate email to her home email. When caught, she told the hospital that she had emailed the data for "future employment reference." (Source: INFOSEC Institute website)
• Unintentional Threats: mistaken disclosure of data, improper disposal of data records.
Cyber-attacks are especially common in the healthcare industry. As an example:
"Stolen patient health records can fetch as much as $363 per record, according to data from the Ponemon Institute, which is more than any other piece of data from any other industry," notes a recent INFOSEC Institute article. The article goes on to state that more than 29 million Americans have had their health information hacked since 2009. (Source: Mathew Chambers article)
How do we overcome internal and external attacks?
In the modern world this is not an easy task. Internal threats are extremely difficult to recognize because of their nature: they represent a breakdown of trustworthiness and truthfulness. They therefore require special strategies to identify, which cannot be done using firewalls or anti-virus programs alone.
The first step is monitoring employees' daily work, which is also not an easy task, and the latest technology brings greater challenges too.
Defense-in-Depth Strategy: Protect the data on several levels by going beyond traditional security to manage privileged access, with a few steps as follows.
Manage Access: Different types of employees have different levels of privilege to access the system, so network monitoring is also important, because not all data should be accessible to all employees. Sensitive data, such as the managing director's salary, should not be accessible to anyone outside the authorized persons of the HR department. (Automation of data access management is important; see the sketch after this list.)
Record all user activities: Helps recognize who did what and when, and establishes their accountability for it (the sketch after this list also shows a basic audit).
Recognize what data is vulnerable to attack: "Prevention is better than a cure." Recognize which data would cause the most damage if exposed in a breach, what has to be protected by law, and what an attacker is most likely to target.
In the healthcare industry this allows HR departments to meet accreditation standards, comply with complex regulations, follow best practices, reduce organizational liability, and manage reputation. Self-policing of staff is also important, and using modern analytics tools we can recognize employees' behavior by analyzing it.
Be Realistic: A well-established health information security program is aware of what works and what doesn't; not every unauthorized access can be prevented. A large health system like Baylor Scott & White has a large number of users and devices in many locations, which is not easy to handle and secure; implementing various steps to recognize unauthorized access by users will reduce the damage to the system.
Privileged Access: From privileged identity management and data loss prevention to advanced authentication, access governance, and so on.
Up-to-date Security: Keeping firewalls and encryption up to date and current on all devices throughout the organization is crucial to keeping weaknesses from being exploited. There's a reason manufacturers issue patches and make updates.
Internal and External Assessments: A good information security team is like a team of security guards constantly making rounds, checking the doors and locks, and probing for weaknesses. Being proactive in seeking out vulnerabilities is vital to discovering what needs to be fixed and where and how to deploy resources.
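To make the "manage access" and "record all user activities" steps concrete, here is a minimal T-SQL sketch; the database, table, role and user names are hypothetical, not taken from any real system:

-- Hypothetical: only an HR role may read the sensitive salary table.
CREATE ROLE hr_reader;
GRANT SELECT ON dbo.ExecutiveSalaries TO hr_reader;  -- nobody else is granted access
ALTER ROLE hr_reader ADD MEMBER [hr_alice];          -- hypothetical HR user

-- Hypothetical: record who reads or changes patient records.
CREATE SERVER AUDIT PatientAccessAudit TO FILE (FILEPATH = 'C:\AuditLogs\');
ALTER SERVER AUDIT PatientAccessAudit WITH (STATE = ON);
CREATE DATABASE AUDIT SPECIFICATION PatientTableAccess
FOR SERVER AUDIT PatientAccessAudit
ADD (SELECT, UPDATE ON dbo.PatientRecords BY public)
WITH (STATE = ON);

Note that the server audit is created in the master database, while the audit specification is created in the user database being monitored.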
Q1) Section-A (Part 3)
If the hospital wants to form a partnership with an Indian hospital, the Sri Lanka Health Care Hospital has to consider the scenarios below as well. Mainly, we have to understand the global vulnerabilities of the healthcare sector all around the world. Unlike in other industries, there has been no huge investment in healthcare data security, but hackers recognized these vulnerabilities and turned them to their advantage. As a result, healthcare data is at risk.
Unlike in Sri Lanka, in India threats to healthcare data security are more real and complex. According to one study, about 90% of healthcare organizations there have suffered at least one data breach in the past two years. (Source: ETHealthWorld)
In this kind of situation, data security is a huge challenge in India, where the healthcare sector is growing at an unprecedented rate; it is expected to reach around $55 billion by 2020, making India the sixth largest market globally. There is a huge risk in protecting patient files, prescription records, diagnostics data, insurance records and billing details.
Data Encryption: In light of the above, we have to focus more on data encryption. We have to start looking for data encryption and key management solutions to secure against data threats, implementing technologies such as the following (a sketch appears after this list):
• AES-NI: Advanced Encryption Standard New Instructions, a data protection instruction set introduced by Intel Corporation, and one of the best encryption technologies we can build on.
• SED: Self-Encrypting Drive. In this technology the encryption is done with a unique and random Data Encryption Key, which is used both to encrypt and to decrypt the data.
Because the encryption work is offloaded from software, this generally preserves operating system performance and processor speed as well.
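On the database side, SQL Server's Transparent Data Encryption (TDE) encrypts data at rest with AES and can take advantage of AES-NI hardware. A minimal sketch, assuming hypothetical names and a placeholder password:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Placeh0lder';  -- placeholder only
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

USE HospitalDB;  -- hypothetical database
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE HospitalDB SET ENCRYPTION ON;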
Data Level: Different levels of management, as well as different departments, should have their own sets of privileges in both organizations. Access privileges can also have a huge impact on the system.
Authentication: Strong authentication is required here, because between these two organizations the health data must be protected and access to it controlled.
• Access Control: Healthcare authentication requires implementing technical policies and procedures that allow only authorized persons to access data.
• Audit Controls: Hardware, software and procedural techniques have to be implemented to record and examine access and other activity in the system.
Authorization: Different entities in the organizations (doctors, nurses, patients) often use different degrees of authorization for specific parts of the system's information.
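One way to sketch such per-role authorization in SQL Server is row-level security; the table, column, role and session key below are all hypothetical:

-- Hypothetical: nurses see only patients in their own ward; doctors see all.
CREATE FUNCTION dbo.fn_ward_filter(@WardId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS allowed
       WHERE @WardId = CAST(SESSION_CONTEXT(N'ward_id') AS int)
          OR IS_MEMBER('doctor_role') = 1;

CREATE SECURITY POLICY WardPolicy
ADD FILTER PREDICATE dbo.fn_ward_filter(WardId) ON dbo.PatientRecords
WITH (STATE = ON);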
Q1) Section-A (Part 4)
Moving to the cloud brings numerous advantages to the hospital in different ways. However, health records are sensitive data; therefore data security is a challenge for the healthcare industry. These health records are also valuable communication tools that support clinical decision-making.
We have to be concerned about patients' privacy as well as the confidentiality of health records. In this digitalized world, devices like tablets, smartphones and various web-enabled gadgets are used in our daily lives, and their continued expansion also impacts our daily routines. If the hospital wants to implement new technology, it has to make sure health data are safe and accessible in order to treat patients.
Storing health data in the cloud has become a trend in the modern world, because cloud-based storage providers have highly secure infrastructure and are liable for the protection of the data.
Cost Management: Given this hospital's facilities and scale, it would have to invest heavily in data security, storage, disaster recovery and so on. That would be a large investment for the hospital, and the maintenance cost would be very high. But if it gets the service from expert cloud-storage providers, it can save a large amount of money, obtaining the service for a service charge.
Security: Reputable cloud service providers have sufficiently secure infrastructure to store data in the cloud. Encrypting our data before uploading it is a good idea, just to be on the safe side: to make sure our data remains safe from hackers, we can write a small script that reads a file in binary and encrypts it with a secret key that only we know; when we wish to download it back, we use the same key to decrypt it. There will also be an agreement setting out the cloud provider's liability for data privacy and security, and if something goes wrong on the cloud provider's side, the organization (the hospital) can claim against it.
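The script itself is application-side, but the same encrypt-before-storing idea can be sketched in T-SQL with a passphrase; the table, columns and passphrase here are hypothetical:

-- Hypothetical: encrypt a note before it is synchronized to the cloud.
UPDATE dbo.PatientNotes
SET NoteEncrypted = ENCRYPTBYPASSPHRASE('OnlyWeKnowThis!', NoteText);

-- Decrypt with the same passphrase when reading it back.
SELECT CAST(DECRYPTBYPASSPHRASE('OnlyWeKnowThis!', NoteEncrypted) AS nvarchar(max)) AS NoteText
FROM dbo.PatientNotes;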
Flexibility with various features: The hospital can discuss with the cloud provider further improvements to the various features and facilities that would be helpful in the cloud services.
Disaster Recovery: With cloud storage there is far less risk, and business continuity is all-important to the healthcare sector. Cloud collaboration and information-sharing between providers and patients are essential for timely diagnosis and quality of care.
Real-Time and Remote Access: Doctors will be able to share files in real time, as well as remotely, to consult with other specialists and their colleagues, with zero compromise on patient privacy, by using secure cloud-based file sharing. This enables healthcare providers to improve their response times through efficient and safe collaboration.
Q2) SECTION-B (Part-1)
Significance of High Availability and Disaster Recovery for the Banking Industry
When we talk about high availability in the banking industry, today's financial services operate globally, 24 hours a day and 7 days a week; full-time, consistent access with the best possible transaction-processing performance is required, and there is simply no time for system downtime or delay. When we focus on credit and debit card transactions, the trading applications are required to authorize and process transactions in real time, within fractions of a second, and not even a small amount of system downtime is acceptable.
Significance of Disaster Recovery: In the banking industry even a few seconds of system downtime can affect a large number of transactions, so continuity of service is extremely important in the industry.
The best example: on September 11, 2001, the terrorist attack destroyed the World Trade Center in New York, a highly regarded financial district, and it devastated the financial system. The banks located in the World Trade Center faced an unprecedented disaster. Companies' back-up facilities, which were too close to the primary facilities, were disrupted along with them, and single points of failure in supposedly diverse routing resulted in failed back-up communications systems. After this attack there was a significant increase in disaster recovery planning, giving birth to a new industry: the disaster recovery industry. (Source: Robert Bronner, 1997)
The objective of disaster recovery planning is to protect the bank and minimize losses during a disaster; it is a critical approach for banks. Disaster recovery planning divides into the following measures:
• Preventive Measures: Focus on stopping a disaster before it happens, by identifying risks in advance and minimizing them, for example by backing data up off-site, using surge protectors, etc.
• Detective Measures: Recognize any kind of unwanted event in the IT infrastructure, including previously undiscovered threats. Implementation includes fire alarm installation, up-to-date anti-virus software, network monitoring software, etc.
• Corrective Measures: Actions that can be taken after a disaster to restore the system.
This concept will keep moving forward in the banking industry: as new technology arrives, it will adapt and provide more effective solutions for the future. With disaster recovery plans that are properly tested and committed to by senior management, a bank can effectively maintain operations while providing for the safety of people and assets. The ultimate objective of this process is continuity of the banking system, whatever happens.
Q2) SECTION-B (Part-2)
High Availability and Disaster Recovery (HADR) for SQL Server in Microsoft Azure Virtual Machines
Microsoft Azure virtual machines (VMs) with SQL Server can help reduce the cost of a high availability and disaster recovery (HADR) database solution. Azure virtual machines support most of the SQL Server HADR solutions, both as Azure-only and as hybrid solutions. In an Azure-only solution, the entire HADR system runs in Azure. In the hybrid configuration, part of the solution runs in Azure and the other part runs on-premises in our organization. The flexibility of the Azure environment enables us to move partially or completely to Azure to satisfy the budget and HADR requirements of our SQL Server database systems.
It is up to us to ensure that our database system possesses the HADR capabilities that the service-level agreement (SLA) requires. The fact that Azure provides high availability mechanisms, such as service healing for cloud services and failure recovery detection for virtual machines, does not itself guarantee that we can meet the desired SLA. These mechanisms protect the high availability of the VMs, but not the high availability of SQL Server running inside the VMs: it is possible for the SQL Server instance to fail while the VM is online and healthy. Moreover, even the high availability mechanisms provided by Azure allow for downtime of the VMs due to events such as recovery from software or hardware failures and operating system upgrades.
HADR deployment architectures
SQL Server HADR technologies that are supported in Azure include:
• Always On Availability Groups
• Always On Failover Cluster Instances
• Log Shipping
• SQL Server Backup and Restore with Azure Blob Storage Service
• Database Mirroring – Deprecated in SQL Server 2016
Azure-only: High availability solutions
We can have a high availability solution for SQL Server at the database level with Always On Availability Groups, called availability groups. We can also create a high availability solution at the instance level with Always On Failover Cluster Instances, called failover cluster instances. For additional redundancy, we can create redundancy at both levels by creating availability groups on failover cluster instances.
As an example, availability groups: availability replicas running in Azure VMs in the same region provide high availability. We also need to configure a domain controller VM, because Windows failover clustering requires an Active Directory domain.
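A minimal sketch of creating such an availability group in T-SQL on the primary replica; the group, server, endpoint and database names are hypothetical:

CREATE AVAILABILITY GROUP HospitalAG
FOR DATABASE HospitalDB
REPLICA ON
    'SQLVM1' WITH (ENDPOINT_URL = 'TCP://sqlvm1.contoso.com:5022',
                   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                   FAILOVER_MODE = AUTOMATIC),
    'SQLVM2' WITH (ENDPOINT_URL = 'TCP://sqlvm2.contoso.com:5022',
                   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                   FAILOVER_MODE = AUTOMATIC);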
Failover cluster instances: Failover Cluster Instances (FCIs), which require shared storage, can be created in three different ways.
Azure-only: Disaster recovery solutions
• Database Mirroring: Principal and mirror servers running in different datacenters for disaster recovery. You must deploy using server certificates, because an Active Directory domain cannot span multiple datacenters.
• Backup and Restore with Azure Blob Storage Service: Production databases backed up directly to blob storage in a different datacenter for disaster recovery (see the sketch after this list).
• Replicate and Failover SQL Server to Azure with Azure Site Recovery: A production SQL Server in one Azure datacenter replicated directly to Azure Storage in a different Azure datacenter for disaster recovery.
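As a hedged illustration of the backup-to-blob-storage option, using a shared access signature credential; the storage account, container, SAS token and database name are all assumptions:

-- Hypothetical credential named after the container URL.
CREATE CREDENTIAL [https://mystorage.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';  -- placeholder

-- Back the database up directly to blob storage in another datacenter.
BACKUP DATABASE HospitalDB
TO URL = 'https://mystorage.blob.core.windows.net/backups/HospitalDB.bak';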
Q2) SECTION-B (Part-3)
Microsoft SQL Azure supports T-SQL, a set of programming extensions from Sybase and Microsoft that adds several features to SQL, including transaction control, exception handling, row processing and declared variables. However, SQL Azure implements only a subset of T-SQL. Most of the core and crucial T-SQL features, like cursors, triggers and transactions, are implemented, but some T-SQL features are not: trace flags, the Common Language Runtime, system tables, distributed transactions and queries, and database mirroring are some of them.
Anyway, if our database requires the missing features above, we have to rewrite the database or wait until SQL Azure supports the features we require; the best solution, though, is to create a virtual machine in MS Azure with a full installation of SQL Server. Then we can avoid most of the limitations.
We can't run agent jobs: this is one of the missing features, since SQL Server Agent is not supported on SQL Azure, so we cannot run jobs via an agent. Bandwidth must also be able to handle the data transfer needed to upload or synchronize data to an Azure instance over the network, and in the cloud environment we require remote-synchronization mechanisms.
Database connections can be dropped under heavy load. Common scenarios for connections being killed include transactions that use more than a gigabyte of logging, consume more than one million locks, or run for more than 24 hours; or sessions that use more than 16 MB for more than 20 seconds, or that idle for more than 30 minutes. SQL Azure's throttling and limiting behaviors have been discussed in other venues as well. However, the better tuned our application, the less likely it is to be throttled.
SQL Reporting can generate extra cost. It is another feature of SQL Azure, charged at a rate of 88 cents per hour per reporting instance. To avoid overcharges, a maximum of 200 reports can be generated per hour.
When we talk about Azure's data charges: normally moving data into SQL Azure is free per se, as are inbound data transfers generally. However, outbound data is charged at 12 cents per GB for the first 10 TB each month. Any Windows Azure VPN connection to Azure is charged 5 cents per hour, in either direction; we need to keep that in mind if we decide to use Azure's VPN facility to transfer data securely into SQL Azure.
Other charges can arise later as well, depending on the service-level agreement, if we make changes to the services; some features that are available in preview mode can be free for a particular time but might be charged for afterwards.
Q3) SECTION-C (Part-1)
SQL Tuning: We use SQL statements to retrieve data from databases, and there are any number of ways to get the same kind of results. SQL tuning focuses on optimizing query performance.
As an example: Select cust_id, cust_name, age, gender from customer_info;
Instead of: Select * from customer_info;
This focuses on exactly what we want to get, avoiding retrieving the entire table.
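Another common tuning habit, sketched on the same table with a hypothetical registered_on column: keep predicates sargable so an index can be used.

-- Wrapping the column in a function prevents index use:
Select cust_id, cust_name from customer_info
where YEAR(registered_on) = 2020;

-- A range predicate on the bare column can use an index:
Select cust_id, cust_name from customer_info
where registered_on >= '2020-01-01' and registered_on < '2021-01-01';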
Indexing: Focuses on retrieving rows quickly from existing tables in SQL. If there are thousands of records, retrieving data from a table consumes a lot of time, so creating indexes on columns that are accessed frequently improves data retrieval time. Indexes can be created on a single column or on a group of columns. When an index is created, it first sorts the data and then assigns a ROWID to each row.
As an example: Create an index:
CREATE INDEX abc
ON customer_info (cust_id, cust_name);
Partitioning: Another way of optimizing queries, in which only the subset of a table's data partitions required to answer a particular query is accessed.
A partitioned table uses a data organization scheme in which table data is divided across multiple storage objects, called data partitions or ranges, according to values in one or more table partitioning key columns of the table. Data from a table is partitioned into multiple storage objects based on specifications provided in the PARTITION BY clause of the CREATE TABLE statement. These storage objects can be in different table spaces, in the same table space, or a combination of both.
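A minimal sketch of such a range-partitioned table, following the PARTITION BY clause described above (the table and ranges are hypothetical, and the exact syntax varies by database; SQL Server, for instance, uses partition functions and schemes instead):

CREATE TABLE sales (
    sale_id   INT NOT NULL,
    sale_date DATE NOT NULL,
    amount    DECIMAL(10,2)
)
PARTITION BY RANGE (sale_date)
  (STARTING '2023-01-01' ENDING '2023-12-31' EVERY 3 MONTHS);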
Q3) SECTION-C (Part-2)
Network monitoring is the information-collection function of network management; it gathers data for the network management applications. There are three goals of network monitoring: performance monitoring, fault monitoring and account monitoring.
• Colasoft Capsa Network Analyzer is an Ethernet network analyzer for network monitoring and troubleshooting purposes. It constantly monitors your network to look for unusual activity and analyzes data packets for malicious or potentially problematic issues. The program employs safeguards specifically focused on security issues, so anything your firewall is not dealing with, you should be able to pinpoint with it. Its features include:
• Record Network Profile, Boost Working Efficiency
• Set Your Analysis Objective, Perform Customized Analysis
• Powerful Customizable Alarms
• Replay Analysis, Reproduce History Network Events
• Custom Protocol, Analyze Unique Protocol Traffic
• Enhanced, Customizable Report
• Microsoft Network Monitor: Network diagnostic tool that monitors local area networks and provides a graphical display of network statistics. Network administrators can use these statistics to perform routine trouble-shooting tasks, such as locating a server that is down, or that is receiving a disproportionate number of work requests. Note that unless you are familiar with network protocols, you will need to read the extensive online documentation before using this program.
• Wireshark: A protocol analyzer focused on helping users with troubleshooting, analysis, and software and protocol development. It is also open source, and it analyzes data packets to monitor for problems in network traffic or to identify connection problems.
• Pandora FMS: Also network monitoring software, which allows monitoring, in a visual way, the status and performance of several parameters from different operating systems, servers, applications, etc.
Q3) SECTION-C (Part-3)
The purpose of indexes is to speed up query processing in SQL Server for higher performance. We use the index of a book to find a specific topic or lesson on a particular page; in databases we use indexes to do the same task. They avoid going through all the records to find and retrieve the results.
This process creates direct access to the result we want.
There are two types of Indexes in SQL Server:
1. Clustered Index
2. Non-Clustered Index
• Clustered Index
Clustered indexes sort and store the data rows in the table or view based on their key values. These are the columns included in the index definition. There can be only one clustered index per table, because the data rows themselves can be sorted in only one order.
• Non-Clustered Index
Non-clustered indexes have a structure separate from the data rows. A non-clustered index contains the non-clustered index key values, and each key value entry has a pointer to the data row that contains the key value.
The two can be compared point by point (a sketch of creating each follows):
• Ordering: A clustered index is the physical ordering of the column's data; a non-clustered index is a logical ordering and is more like a pointer to the data.
• Count: Because it is the physical ordering of the data in a column of the table, we can have only one clustered index on any table, but any number of non-clustered indexes.
• Defaults: By default the primary key of a table gets the clustered index; a non-clustered index can be used with a unique constraint on the table, acting as a composite key.
• Usage: A clustered index can improve the performance of data retrieval and should be created on columns used in joins, WHERE and ORDER BY clauses. A plain CREATE INDEX query creates a non-clustered index on the columns.
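As a minimal sketch on a hypothetical patient_info table:

-- Clustered: one per table; here on the key used for lookups and ranges.
CREATE CLUSTERED INDEX ix_patient_id
ON dbo.patient_info (patient_id);

-- Non-clustered: as many as needed; here for frequent name searches.
CREATE NONCLUSTERED INDEX ix_patient_name
ON dbo.patient_info (last_name, first_name);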
Q3) SECTION-C (Part-4)
How do we handle our data system more effectively? When we focus on the physical structure of the data system, we have to recognize its components, since they show how data flows through the physical parts of the system. Improving the performance of the data system supports effective decision-making and provides feedback on daily operations, ultimately achieving a specific goal.
An organizational data system's physical components can be broadly categorized as:
• Database and Software
• Input and Output
• Drivers: Keep the drivers in the system up to date, which improves performance. New driver updates bring plenty of error and bug fixes, and new advancements, to the system.
• Hardware upgrades: Extend RAM capacity and install new processor systems, such as the latest workstation processors with larger cache memories (Intel Xeon, AMD Opteron, etc.), which improves processing capability too.
• Databases: DBMS patch updates and database upgrades will solve lots of existing issues, bringing many bug fixes and advanced features (a quick version check is sketched after this list).
• Fixing security issues: Most databases are vulnerable to cyberattacks, and new threats keep coming in the modern world with new advanced technologies; if the databases are not up to date, that creates a huge opportunity for hackers to enter the system. Therefore, databases have to be kept as up to date as possible, and new upgrades reduce the risk.
• Enhanced Features: Many upgrades come with numerous additional features, which improve productivity and make the system much quicker and more reliable.
• Storage Efficiencies: New upgrades are mostly capable of managing storage effectively.
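Before planning the DBMS upgrade described above, the current build can be checked; a small sketch for SQL Server:

-- Report the edition, version and patch level of the running instance.
SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS version,
       SERVERPROPERTY('ProductLevel')   AS patch_level;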