Tuesday, May 31, 2022

Iris Publishers-Open access Journal of Robotics & Automation Technology | The Cyber Vulnerability in Automation of Material Handling and Logistics Systems

 


Authored by MD Sarder*,

Abstract

The level of automation in the material handling and logistics industries has increased significantly in recent years. This increase in automation and integration is driven by customer expectations, technology shifts, and the pursuit of perfection, among other factors. The material handling and logistics industries have not only become more effective and efficient but also more competitive in the marketplace. On the other hand, this increased automation exposes vulnerability to cyberattacks. The frequency and impact of cyberattacks on businesses have doubled in the last five years and are expected to triple in the next five. Cybersecurity breaches pose a dynamic challenge to businesses and threaten their smooth operations and competitive advantage. Studies reveal that one in three small businesses do not have the resources in place to protect themselves. Some businesses are more vulnerable to cyberattacks than others, but none are spared from potential attacks. Businesses need to be strategic in cyber defense and create a resilient system that minimizes the impact of cyberattacks. This paper focuses on cybersecurity challenges and what businesses, especially those in material handling and logistics, should do to address them.

Introduction

Cybersecurity is the ability to prevent, defend against, and recover from disruptions caused by cyber-attacks from adversaries. Cyber-attacks are classified as passive and active attacks [1,2]. Passive attacks are difficult to detect and mainly target confidential data; they are classified as eavesdropping and traffic analysis. Active attacks are classified as masquerade, replay, message modification, and denial of service. Hackers use malware to penetrate a system and breach critical data such as customers' payment and personal details. Cyber breaches are increasing every year, affecting the confidentiality, integrity, and availability of data [2,3]. Material handling supply chain systems are becoming markedly vulnerable to cyberattacks. Over time, material-handling devices have been connected to corporate networks so they can integrate and share information across the enterprise. This helps companies monitor and manage operations remotely, but it also increases the chances of cyber-attacks. When a system is broadly networked, it can be reached by malware. Many companies work with external vendors, which involves sharing and accessing information. This can generate vulnerabilities, especially if the processes are automated. Companies should take measures such as mapping the data flow in the supply chain, planning a comprehensive risk assessment, aligning with emerging standards, and setting clear expectations in all supply chain contracts. Some of the impacts cyber-attacks can have on businesses are:

• Altering the installation settings can cause physical damage to the equipment.

• Changing the production settings can lead to defective products, which will result in loss of profit.

• Malfunction in the installation of the equipment may lead to the release of harmful pollutants at the industrial site and its surroundings.

• Theft of confidential data, such as manufacturing secrets and customer information, poses a risk to the company.

Cybersecurity has become increasingly critical for all industries, including logistics and material handling. Today, the stakes are higher than ever, as most companies operate on some form of technology [3-5]. Technology has become more than a supplement to a company's operations, and hence cybersecurity has become a daily necessity.

Cybersecurity impacts on material handling and logistics technologies should be viewed with the same level of scrutiny as a typical IT infrastructure in any organization. The assumption that this technology, because it centers on IoT-based devices not typically associated with "sensitive" information, is not a target of cybercriminals is naïve. Information technology resources (hardware, software, networks, data, and people) should always be assessed for their impact on the organization using the common principles of confidentiality, integrity, and availability (also known as the CIA Triad) (Figure 1) [6-8].


Confidentiality does not mean that all data within an organization needs the highest level of protection. It is up to each organization to determine the value of its data and have it classified. Data that is required to be protected by law or is valuable to the competitive advantage of an organization, such as intellectual property, should have proper controls in place to protect it from unauthorized disclosure. The integrity of the data is the assurance that only those authorized to add or modify the data can do so. Of course, every organization would want its data to be accurate, but certain functions within an organization are more critical than others in ensuring accuracy. IT resource availability is critical, especially in manufacturing, where a halted process means product cannot be produced. The reliability of systems for some processes may be more important than for others; understanding the risks and developing redundancy is important.

The primary focus of this paper is to identify cybersecurity challenges and what companies, especially material handling and logistics companies, should do to address them. In addition, this paper discusses cybersecurity in general, the cybersecurity framework, the potential impact of cybersecurity breaches, and the implications of cybersecurity for material handling.

Significance of Cybersecurity

Why should businesses care? Because cyberattacks hurt their bottom line. The frequency of cyberattacks and their associated costs are increasing at an accelerating pace. According to a recent survey of 254 companies, the average cost of a data breach in 2017 was $11.7 million [9], up from $7.2 million in 2013 (Figure 2). Costs include everything from detection, containment, and recovery to business disruption, revenue loss, and equipment damage. A cyber breach can also ruin a company's reputation or customer goodwill. The cost of cybercrime varies by country, organizational size, industry, type of cyberattack, and the maturity and effectiveness of an organization's security posture. The frequency of attacks also influences the cost of cybercrime. Even without statistics, it is plain that cybersecurity incidents have exploded. Twenty-three million security breaches were recorded globally in 2011, and by 2013 the number had climbed to 30 million, a 12.8% annual growth rate [9]. The cost of cybercrime has been reported to increase at a rate of 23% per year, costing companies an average of US $11.7 million. The number of successful breaches per company each year has risen by 27%, from approximately 102 to 130 [9]. Ransomware attacks have increased from 13% to 27% of incidents [9]. Information theft is the most expensive consequence of cybercrime: its share of total cost rose from 35% in 2015 to 43% in 2017. The average malware attack costs around $2.4 million [9]. Analysis shows that companies spend the most on detection and recovery. It typically takes approximately 50 days to resolve a malicious insider attack and 23 days to resolve a ransomware attack [9] (Figure 2).


The number of security breaches worldwide grows significantly each year. This large volume of attacks not only puts companies' sensitive information and data at risk but also exposes them to rising costs, both from the attacks themselves and from preventative measures. Extrapolating the average annual growth rate of security breaches, the number of attacks will approach 70 million by 2021. Organizations must acknowledge that their core operations, whether in logistics or material handling, are equivalent to any other IT system in any organization: they run on hardware, software, operating systems, databases, and networks. Thus, they require the same, if not greater, attention and resources that critical systems in other organizations receive. Malware and web-based attacks are the two most costly attack types [9] (Figure 3).


Information security principles need to be assessed for all systems. This starts with senior management supporting the resources needed to ensure security and establishing policy and a risk-governance structure for these systems. Once this has been created, a formalized program following a commonly accepted risk framework such as NIST or ISO provides the guidelines necessary for securing any system. Cybercrime detection and recovery activities account for 55 percent of total internal activity cost (35 percent plus 20 percent), as shown in Figure 4 [9].

Implications of Cybersecurity on Material Handling and Logistics Industries

Studies reveal that the financial sector is the top target for cyberattacks, followed by the utilities, aerospace and defense, and technology sectors. The manufacturing, logistics, and transportation sectors attract a moderate level of cyberattacks, while the communications, education, and hospitality sectors are least vulnerable. Figure 5 shows the cost of cyberattacks by industry sector in 2017 [9].


Material handling and logistics industry groups include Automated Storage/Retrieval Systems, Automated Guided Vehicle Systems, Conveyors and Sortation, Cranes, Electrification and Controls, Hoists, Lifts, Loading Dock Equipment, and Software Systems [10]. Almost all these systems connect to a larger system in real-time operation. For example, an Automated Storage and Retrieval System (AS/RS) is a combination of equipment and controls that handles, stores, and retrieves materials as needed with precision, accuracy, and speed under a defined degree of automation. An AS/RS can be an extremely large, computer-controlled storage/retrieval system totally integrated into a manufacturing and distribution process. In general, an AS/RS consists of a variety of computer-controlled methods for automatically depositing and retrieving loads to and from defined storage locations [10]. AS/RS equipment includes Horizontal Carousels, Vertical Carousels, Vertical Lift Modules, and/or Fixed Aisle (F/A) Storage and Retrieval Systems, the latter utilizing special storage retrieval machines to insert, extract, and deliver loads to designated input/output locations within the aisles being served.

Another example of a material handling system is the Automated Guided Vehicle (AGV). An AGV consists of one or more computer-controlled, wheel-based load carriers that run on the plant floor without the need for an onboard operator or driver. AGVs have defined paths or areas within which or over which they can navigate. Navigation is achieved by any one of several means, including following a path defined by buried inductive wires, surface-mounted magnetic or optical strips, or, alternatively, by way of inertial or laser guidance.

AGVs and other devices within the material handling and logistics industries are smart devices and can be connected through the Internet of Things (IoT) into an integrated system. Any part of this interconnected system is vulnerable to cyberattacks. Cybercriminals can exploit this vulnerability to take control of an individual device, part of a system, or the whole system and create substantial damage, including service disruptions, data loss, equipment damage, other property loss, or injury to people. No one should take the cybersecurity risk to material handling systems lightly.

Current Challenges and How to Address Those Challenges

Companies are facing ever-increasing cyberattack challenges. In many cases, they are struggling to cope with those challenges as they adopt new technologies, operate on web-based applications, work with multi-level constituents, and compete in a demanding environment. Other challenges include a lack of skilled manpower, a lack of awareness of cybersecurity, and a lack of readiness due to the financial commitment required. The following sections highlight some critical challenges and how to respond to them.

Dependence on mobile and web-based technologies

Customer expectations, operational efficiency, supply chain visibility, and convenience, among other factors, are driving companies to rely increasingly on web-based and mobile technologies. This dependence creates vulnerable online targets. Due to the growing number of online targets, hacking has become easier than ever. In customer transactions, the use of mobile devices and apps has exploded. According to a 2014 Bain & Company study, mobile is the most-used banking channel in 13 of 22 countries and comprises 30% of all interactions globally [11]. In addition, customers have adopted online/mobile payment systems, which are vulnerable to cyberattacks.

Enacting a multi-layered defense strategy can reduce vulnerability, provided it covers the entire enterprise: all endpoints, mobile devices, applications, and data. Where possible, companies should utilize encryption and two- or three-factor authentication for network and data access. Some institutions are deploying advanced authentication to confront these added security risks, allowing customers to access their accounts via voice and facial recognition. Companies invest the most in network-layer (online/mobile) protection compared to any other layer. Figure 6 shows the percentage of 2017 spending [9] by companies to protect the various layers of security vulnerability (Figure 6).
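As a concrete illustration of one such layer, the sketch below shows how a time-based one-time password (TOTP), a common second authentication factor, can be generated and verified. It is a minimal sketch assuming the open-source pyotp library; a production deployment would pair it with secure secret provisioning and storage.

import pyotp

# Each user is provisioned with a random shared secret (normally stored
# server-side and enrolled in an authenticator app via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app and the server independently derive a short-lived
# code from the shared secret and the current time.
code = totp.now()

# At login, the server checks the submitted code in addition to the password;
# valid_window=1 tolerates small clock drift between client and server.
print("Second factor accepted" if totp.verify(code, valid_window=1)
      else "Second factor rejected")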

Proliferation of the Internet of Things (IoT)

The Internet of Things (IoT) is a concept of an integrated network in which a wide array of devices, including appliances, equipment, automated guided vehicles, software systems, and even buildings, can be interconnected, primarily through internet connections. Through IoT, all these components become smart and thus subject to cyberattacks. A recent MHI article [12], "Truck Takeovers?", highlighted the vulnerability of devices when they are connected to other systems. IoT revolves around machine-to-machine communication; it is mobile, virtual, and offers instantaneous connections. There are over one billion IoT devices in use today, a number expected to exceed 50 billion by 2020 [11]. The problem with a wide network of interconnected devices is that many cheaper smart devices lack proper security infrastructure and create a multitude of access points. When each technology carries high risk, the combined risk grows exponentially, and multiple access points further increase vulnerability to cyberattacks. Again, enacting a multi-layered defense strategy that protects the entire enterprise, all endpoints, mobile devices, applications, and data is necessary.

Systems vs individual security

No company works in isolation. Companies interact with suppliers/vendors, investors, third-party logistics providers, freight forwarders, insurance providers, and many other stakeholders. Figure 7 shows a simplified cloud-based vendor-managed system in which a network of companies share information with each other. If any of these parties is hacked, each individual company is at risk of losing business data or compromising employee information. For example, the 2013 Target data breach [11,13] that compromised 40 million customer accounts was the result of network credentials being stolen from a third-party heating and air conditioning vendor. A 2013 study indicated [13] that 63% of that year's data breach investigations were linked to a third-party component. Transportation vehicles and their monitoring systems were hacked in 2015, and about 1.4 million vehicles were impacted by the resulting cybersecurity-related recalls [14] (Figure 7).


The paramount priority is to ensure the security of the whole system alliance instead of focusing on individual companies. Performing third-party vendor assessments or creating service-level agreements with third parties can significantly reduce the vulnerability of the whole system. Companies can implement a "least privilege" policy regarding who and what others can access and create a policy to review the use of credentials with third parties. Companies could even take it a step further with a service-level agreement (SLA), which contractually obligates third parties to comply with the company's security policies. The SLA should give the company the right to audit the third party's compliance.

Information loss and theft

Critical information such as trade secrets, intellectual property (including source code), operational data, customer information, and employee records provides competitive advantage, and its loss or theft as a result of a cyber-attack is detrimental to companies. The loss or theft of this data not only incurs direct costs but also means dealing with lost business opportunities and business disruption.

Companies should deploy extensive data encryption techniques and continuously back up data. This can help safeguard against ransomware, which freezes computer files until the victim meets the monetary demands. Backing up data can prove critical if computers or servers are locked for any reason. In addition to backing up data, companies should patch and whitelist software frequently. A software patch is a code update to existing software, often a temporary fix between full releases. A patch may fix a software bug, address a new security vulnerability, address software stability issues, or install new drivers. Application whitelisting prevents computers from installing non-approved software, which is commonly used to steal data.
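As a minimal sketch of the encrypt-and-back-up practice described above, the code below encrypts a file before copying it to a backup location. It assumes the open-source cryptography package, and the file name and backup path are hypothetical; a real deployment would keep the key in a key-management service rather than in memory or on disk beside the data.

import shutil
from pathlib import Path
from cryptography.fernet import Fernet

# Generate a symmetric key once and store it separately from the backups.
key = Fernet.generate_key()
cipher = Fernet(key)

source = Path("customer_records.db")      # hypothetical data file
backup_dir = Path("/mnt/offsite_backup")  # hypothetical backup target

# Encrypt the file contents so a stolen backup is unreadable without the key.
encrypted_path = Path(source.name + ".enc")
encrypted_path.write_bytes(cipher.encrypt(source.read_bytes()))

# Copy the encrypted file to the backup location.
shutil.copy2(encrypted_path, backup_dir / encrypted_path.name)

# Restoring reverses the process: retrieve the backup, then decrypt.
restored = cipher.decrypt(encrypted_path.read_bytes())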

Lack of cybersecurity awareness and readiness

Despite major headlines around cybersecurity and its threats, there remains a gap between companies' awareness of cybersecurity, the potential consequences of cyberattacks, and company readiness to address them. In the last year, hackers breached half of all U.S. small businesses. According to the Ponemon Institute's 2013 survey [11], 75% of respondents indicated that they did not have a formal cybersecurity incident response plan, and sixty-six percent (66%) of respondents were not confident in their organization's ability to recover from a cyberattack. Further, a 2017 survey [13] from cybersecurity firm Manta indicated that one in three small businesses do not have the resources (skilled manpower, security systems, tools, and money) in place to protect themselves. As mentioned earlier, most cyberattacks target financial companies, but manufacturing, logistics, and service companies are not spared. According to the same study, in 2013, 88% of the attacks initiated against financial companies succeeded in less than a day. However, only 21% of these were discovered within a day, and in the post-discovery period, only 40% were restored within a one-day timeframe [13].

Real-time intelligence is a powerful tool for preventing and containing cyberattacks; the longer it takes to identify a hack, the more costly its consequences. To gain real-time intelligence, companies must invest in enabling security technologies, including the following:

• Security intelligence systems

• Advanced identity & access governance

• Automation, orchestration & machine learning

• Extensive use of cyber analytics & user behavior analytics

• Extensive deployment of encryption technologies

• Automated policy management

• Innovative systems such as blockchain

Innovative technologies are evolving and their full benefits are still unknown, but companies should be at the forefront of adopting new technologies. As the application and utility of blockchain in a cybersecurity context emerges, there will be a healthy tension but also complementary integrations with traditional, proven cybersecurity approaches [15]. Companies are targeting a range of use cases that blockchain helps enable, from data management to decentralized access control to identity management.

Conclusion

Cybersecurity has become an essential part of business life. It poses a dynamic challenge to companies and threatens their smooth operations and competitive advantage. Attention to the dangers of cyberattacks is on the rise, but unfortunately the majority of companies are not well equipped to address the issue. Despite this increased attention, there remains a gap between companies' awareness of cybersecurity, the potential consequences of cyberattacks, and company readiness to address them. The high magnitude of the potential financial impact of cybersecurity breaches continually compels companies to be resilient, invest in security defense, and address the problem from a system perspective rather than an individual company perspective.

Among other pressures, companies face critical cybersecurity challenges as they adopt new technologies, operate on web-based and mobile applications, work with internal and external partners, and compete in a demanding environment. Other challenges include a lack of skilled manpower, a lack of awareness of cybersecurity, and a lack of readiness due to the financial commitment required. While these challenges are difficult, companies can minimize the impact by deploying tactical and strategic initiatives, including enacting a multi-layered defense strategy, applying extensive encryption techniques, securing access points, creating service-level agreements with third parties, and investing in security technologies. Addressing cybersecurity challenges not only prevents business disruption but also improves competitive advantage.


Iris Publishers-Open access Journal of Anatomy & Physiology | Weak Beliefs, Strongly Held: Challenging Conventional Paradigms of Maximal Exercise Performance

 


Authored by Evan Peikon*,

Abstract

It is widely believed that peak cardiac output and total body hemoglobin content are the dominant and deterministic pathways accounting for the vast majority of interindividual variability in VO2max. This article presents the case that VO2max represents the maximum integrated capacity of the cardiovascular, pulmonary, and muscular systems, and that the 'limiting' factor for VO2max can vary between individuals.

Introduction

It is popularly believed that the dominant and deterministic pathways accounting for the vast majority of interindividual variability in VO2max are well known and center on total body hemoglobin content and peak cardiac stroke volume and, as a result, cardiac output [1]. Some go as far as to assert that VO2max improvements are determined by an increase in stroke volume and a relatively preserved oxygen-carrying capacity of the blood [2,3]. This paradigm emerged as a result of Archibald Hill's work in the early 1900s. Undoubtedly, Hill's work contained many partial truths, but its partial validity should not mask its shortcomings. There are certain instances where other factors can become the "weak link" in the transport and utilization of oxygen. One example is elite athletes with high maximal cardiac outputs, in whom the decreased transit time of red blood cells in the pulmonary capillaries can lead to a pulmonary diffusion limitation [4,5].

According to the late philosopher Karl Popper, a theory or conjecture can only belong to the empirical sciences if it is falsifiable. By this criterion, a theory is falsifiable if it is refutable, and a theory is refutable if there exists at least one potential falsifier [6]. Although many logical refutations have been made of the idea that cardiac output is the dominant factor contributing to interindividual variations in VO2max, it is still the dominant paradigm in exercise physiology. According to Tim Noakes, this belief has straight-jacketed exercise physiology for the past sixty-two years [7]. If scientific observations don't agree with the dominant theory, the theory is meant to be abandoned. At least, that is how it is supposed to happen. In practice, people are very reluctant to give up a theory in which they have invested a lot of time and effort, or dominant paradigms persist due to path dependency, as has often been the case in exercise science [8,9]. According to the late physicist Stephen Hawking, "[scientists] usually start by questioning the accuracy of the observations. If that fails, they try to modify the theory in an ad hoc manner. Eventually, the theory becomes a creaking and ugly edifice. Then someone suggests a new theory in which all the awkward observations are explained in an elegant and natural manner" [10].

This paper reviews the evidence that the central cardiovascular system's ability to transport oxygen to the tissues is not the principal determinant of VO2max, but one of a handful of potential physiological rate-limiting factors that can limit VO2max in an individual. Additionally, this paper asserts that, in addition to the central cardiovascular system, other deterministic pathways accounting for interindividual variations in VO2max are the pulmonary diffusion capacity for oxygen and carbon dioxide, which is largely impacted by the fatigue resistance of the diaphragm, as well as the metabolic capacity of skeletal muscle, among other secondary factors discussed further in this paper. These assertions should not be interpreted as attacks on tradition. In order to see further than scientists of the past, we must stand on their shoulders, pay them respect, and learn from them. We do not honor the past, however, when we cling to its conventions in the face of disconfirming evidence.

Classical Views of VO2max

What is VO2max and how is it measured?

VO2max refers to the maximum rate of oxygen consumption measured during intense exercise. VO2max can be measured in absolute liters of oxygen consumed per minute (L/min) or relative to body mass in milliliters of oxygen consumed per kilogram per minute (mL/kg/min). The concept that there exists a finite rate of oxygen transport from the environment to the mitochondria of exercising muscles began with Archibald Hill and Hartley Lupton [11]. Since then, VO2max has become one of the most ubiquitous measurements in all of exercise science. VO2max is a physiological characteristic bounded by the parametric limits of the Fick equation, which states that VO2max = Q × (Ca-vO2), where Q stands for cardiac output, calculated as stroke volume multiplied by heart rate, and Ca-vO2 represents the arteriovenous oxygen difference [12].
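To make the units concrete, here is a short worked example of the Fick equation in Python; the numbers are illustrative textbook-style values, not measurements from any cited study.

# Fick equation: VO2max = Q x (Ca-vO2)
# Q: cardiac output (L blood/min) = stroke volume x heart rate
# Ca-vO2: arteriovenous O2 content difference (mL O2 per L blood)

stroke_volume_l = 0.125   # 125 mL per beat (illustrative)
heart_rate_bpm = 200      # beats per minute at maximal exercise
ca_o2 = 200.0             # arterial O2 content, mL O2 per L blood
cv_o2 = 50.0              # mixed-venous O2 content, mL O2 per L blood
body_mass_kg = 75.0

q = stroke_volume_l * heart_rate_bpm   # 25.0 L/min
vo2_abs = q * (ca_o2 - cv_o2)          # 3750 mL O2/min, i.e. 3.75 L/min
vo2_rel = vo2_abs / body_mass_kg       # 50.0 mL/kg/min

print(f"VO2max: {vo2_abs / 1000:.2f} L/min ({vo2_rel:.1f} mL/kg/min)")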

The best-accepted method for measuring VO2max is the cycle ergometer ramp test completed to exhaustion, though other modalities can be used effectively. This test involves exercising at an intensity that increases every few minutes until the participant reaches volitional failure at a point of maximal exertion. During the test, the participant wears a face mask that measures the volumes and gas concentrations of inspired and expired air. It is important to note that an individual's maximum attainable rate of oxygen consumption will vary slightly with the specific protocol and modality used. As a result, it is better to conceptualize VO2max as a range of values rather than a single discrete number for each individual.

Classical views of VO2max emphasize its critical dependence on convective oxygen transport to the working muscles, yet there is little discussion of how pulmonary or local muscle oxidative capacity may impact VO2max. The belief that central factors, such as stroke volume and cardiac output, are the primary limiters of VO2max has become so entrenched within exercise physiology that the underlying assumptions are rarely questioned. This is not to say that central factors are unimportant in predicting an individual's VO2max; that would be disingenuous given the large body of literature suggesting otherwise, which I analyze in the next subsection. However, I believe there is good reason to think that other factors can, and do, limit VO2max and performance in both novice and elite athletes, as discussed later on.

Classic views regarding limitations to VO2max

As previously discussed, VO2max is described by the Fick equation, VO2max = Q × (Ca-vO2), where Q stands for cardiac output, calculated as stroke volume multiplied by heart rate, and Ca-vO2 represents the arteriovenous oxygen difference [12]. Although oxygen transport to the skeletal muscle is a product of both blood flow and arterial oxygen saturation, the latter has been dismissed as a potential limiting factor in healthy athletes. Instead, the dominant paradigm is that central factors primarily constrain VO2max and that cardiac output, and more specifically stroke volume, is the most critical physiological or structural component of VO2max in humans [1].

Although heart rate is also a prime contributor to cardiac output, the fact that heart rate is similar among young humans has been used to assert that stroke volume is the most important factor contributing to inter-individual differences in VO2max [1]. This makes sense given that enlargement of cardiac dimensions, improved contractility of the heart, and an increase in blood volume are all common cardiovascular adaptations to exercise training, all of which allow greater filling of the ventricles and, consequently, increased stroke volume [13]. Additionally, the thoracic pump can also increase stroke volume, thereby linking changes in breathing to increases in cardiac output. When one takes a deep breath in, there is an immediate decrease in intrathoracic pressure, which decreases central venous pressure. When central venous pressure drops, it creates an increased driving pressure, which promotes greater venous return. Because the cardiovascular system is a closed circuit, any increase in venous return will ultimately increase cardiac output. In the case of the thoracic pump, an increase in venous return causes an increase in end-diastolic volume, stroke volume, and, subsequently, cardiac output.

There is also evidence that changes in blood hemoglobin concentration and hemoglobin mass impact the central factors that constrain VO2max. For example, Per-Olof Åstrand showed a close relationship between total Hb mass and VO2max, such that the differences between adults and children and between men and women were primarily due to differences in total hemoglobin [14]. Additionally, it has been shown that an acute reduction in Hb concentration, even when blood volume is maintained, results in lower endurance performance due to a decreased oxygen-carrying capacity of the blood [15]. Conversely, an increase in Hb concentration is associated with enhanced endurance capacity, in proportion to the increase in the blood's oxygen-carrying capacity [15]. Because increases in blood volume also lead to increases in end-diastolic volume, ejection fraction, and stroke volume, there is a clear association between increases in Hb concentration and blood volume and an increase in VO2max (Figure 1).

Counterevidence to central factors as the dominant & deterministic limiters to VO2max

There is substantial evidence that central factors, namely maximal cardiac output, limit VO2max [16]. However, the presence of one limitation does not mean that VO2max cannot be limited by other factors, like the pulmonary system or oxygen utilization within the working skeletal muscle, or that there aren't cases where improving maximal cardiac output does not improve VO2max. Simply put, the existence of one phenomenon does not disprove the presence of another. Similarly, the fact that one training method, like high-intensity interval training, has been shown to be efficacious for improving VO2max does not mean that different exercise prescriptions will not also improve the same variable [17]. George Brooks presented this idea eloquently when he said, "It is wise to note that we are all individuals and that whereas physiological responses to particular stimuli are largely predictable, the precise responses and adaptations to those stimuli will vary among individuals. Therefore, the same training regimen may not equally benefit all those who follow it" [18]. There is evidence that the inter-individual variability in response to a specific training method may depend on which physiological systems are best developed in an individual at the time of the workout and on what their limiting factor for VO2max is.

As early as the early 1900s, it was speculated that factors other than the circulatory system limit oxygen delivery to the working muscle. According to Hill, Long, and Lupton, "In running the oxygen requirement increases continuously as the speed increases, attaining enormous values at the highest speeds; the actual oxygen intake, however, reaches a maximum beyond which no effort can drive it. The oxygen intake may attain its maximum and remain constant merely because it cannot go any higher owing to the limitations of the circulatory and respiratory system" [19]. Since then, additional evidence has supported the view that the pulmonary system can be a limiting factor in maximal effort exercise. For example, in elite athletes with very high maximal cardiac outputs, the decreased transit time of red blood cells in the pulmonary capillaries can lead to a pulmonary diffusion limitation. This was demonstrated in 1965 when the former mile world record holder Peter Snell performed a maximal treadmill step test and finished with an SpO2 level of 80% [20]. This finding was later confirmed by Dempsey et al. and Powers et al., who showed that arterial oxygen desaturation occurs in some highly trained endurance athletes and that when these subjects breathe hyperoxic gas mixtures, their hemoglobin saturation and VO2max increase [21,22]. It has also been shown that arterial desaturation occurs in intermediate to advanced CrossFit competitors performing maximal step tests and sport-specific competitions [23]. These data suggest that pulmonary gas exchange may contribute significantly to the limitation of VO2max in highly trained athletes who exhibit exercise-induced reductions in SpO2 at sea level, and that a healthy pulmonary system may become a so-called 'limiting' factor for oxygen transport and utilization, as well as CO2 transport and elimination, during maximal short-term exercise in the highly trained.

According to the Fick equation, every change in VO2max is matched by a concomitant change in maximal cardiac output or the arteriovenous difference [24]. One mechanism by which impaired pulmonary diffusion would limit VO2max is by narrowing the arteriovenous difference. If that reasoning holds, then a widening of the arteriovenous difference in individuals with a pulmonary limitation should be accompanied by an increased VO2max, which has been shown to occur [22]. Additionally, an oxygen extraction limitation may be present, which would also truncate the arteriovenous oxygen difference. As a result, an improvement in oxygen extraction would be accompanied by an increase in VO2max in individuals with impaired oxygen extraction, due to an increase in the arteriovenous difference [25].

It’s important to consider that improvements in VO2max from increased maximal cardiac output and a widened arteriovenous difference are not independent phenomena. As alluded to previously, it is not a question of ‘either, or’, but a question of which variable is the primary ‘limiting’ factor in oxygen transport and utilization in an individual. This has been demonstrated by Skovereng, et al., where it was shown that VO2max was increased through both an improvement in peak cardiac output as well as a widened a-v¯O2 diff, which were attributed to cardiac remodeling and mitochondrial biogenesis respectively [26].

When we analyze the VO2max literature through the privileged lens of twenty-first-century scientific insight, the traditional view that maximal cardiac output is the dominant and deterministic limiter of VO2max no longer fits. The collapse of the traditional paradigm's conceptual foundations leaves a void, yet simultaneously creates opportunities to re-evaluate conventional doctrines and evolve more nuanced perspectives. As a result, I propose we adopt David Poole's definition of VO2max and redefine the term as the maximum integrated capacity of the pulmonary, cardiovascular, and muscular systems to uptake, transport, and utilize oxygen, respectively [27].

Re-Envisioning Traditional Paradigms of Exercise Limitations

VO2max as a measure of integrated capacity

In the last section, I suggested that we redefine the term VO2max to mean the maximum integrated capacity of the pulmonary, cardiovascular, and muscular systems to uptake, transport, and utilize oxygen [27]. This is in opposition to the traditional definition of VO2max, the maximum rate of oxygen consumption measured during intense exercise. The latter is a reductionist take on a complex variable, whereas the former is more holistic. In spite of mounting evidence that VO2max can be limited by multiple physiological variables, including pulmonary diffusion capacity for oxygen, maximal cardiac output, peripheral circulation, and the metabolic capacity of skeletal muscle, most coaches and physiologists still do not hold this view [28]. Instead, most believe that the central cardiovascular system's capacity to transport oxygen to the working muscles is the principal determinant of VO2max.

This paradigm emerged as a result of Archibald Hill's work in the early 1900s. Hill's work undoubtedly contained many partial truths, but its partial validity should not mask its shortcomings. It is crucially important to remember that Hill formulated his hypothesis from a small number of measurements, specifically of expired respiratory gases [11]. He included no measurements of cardiovascular function or detailed respiratory function, nor did he take any measurements of skeletal muscle metabolic or contractile function. An unfortunate consequence is that generations of exercise scientists have been taught that respiratory gas analysis alone can answer questions about the factors that limit human performance, but I believe this inherited wisdom is incorrect. For example, in his quantitative estimates, Hill calculated that arterial blood would be 90% saturated during all-out exercise and mixed venous blood 10-30% saturated, and that these values would generalize to all exercising athletes [16]. This assumption leads one to treat the arteriovenous difference as nearly fixed, which leads to the natural conclusion that cardiac output is the primary determinant of VO2max, as Hill asserted. As citizens of the 21st century, we have the privilege of information and technological innovations that Hill did not have access to, like the ability to measure both arterial oxygen saturation (SpO2) and muscle oxygen saturation (SmO2). As a result, we know that there is considerable variability in athletes' arterial oxygen saturation during maximal effort exercise, as well as in their ability to utilize oxygen in the working muscles, which means that a range of arteriovenous oxygen differences can occur [29,23,30]. This opens the door to exploring limiting factors for VO2max other than maximal cardiac output, like pulmonary diffusion limitations or skeletal muscle oxidative capacity limitations [31,24]. These variations in individual rate-limiting factors for VO2max can explain why different athletes' responses to standardized training programs can be remarkably diverse [32]. Many of these inter-individual variations can be observed with technologies like near-infrared spectroscopy.

The Future is NIRS

NIRS stands for near-infrared spectroscopy, a technology that allows one to measure in vivo oxidative metabolism in human skeletal muscle. A NIRS device consists of a light source emitting two or more wavelengths in the near-infrared range of 650-1000 nm and a detector placed at a known distance from the light source. Since near-infrared light penetrates biological tissue with less scattering and absorption than visible light, it offers many advantages for imaging and quantitative measurement. These quantitative measurements rest on the Beer-Lambert law, which states that certain materials attenuate the transmission of light at specific wavelengths; when this equation is adapted to the properties of human muscle, it allows one to measure changes in oxygenated and deoxygenated hemoglobin concentrations within a given muscle. This is possible because the chromophores hemoglobin and myoglobin are oxygen carriers in the blood and skeletal myocytes, respectively, and their absorbance of near-infrared light depends on whether they are in an oxygenated or deoxygenated state [33]. As a result, NIRS measurements reflect the balance of oxygen delivery to the working muscles and oxygen consumption in the capillary beds [34]. This makes NIRS a very useful tool for assessing two of the major determinants of exercise capacity: oxygen delivery and oxygen utilization [33].
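The quantitative step can be sketched with the modified Beer-Lambert formulation commonly used in NIRS: the change in attenuation at each wavelength is a weighted sum of the concentration changes of oxygenated and deoxygenated hemoglobin, scaled by the optical pathlength. The sketch below solves the resulting two-wavelength linear system with NumPy; the extinction coefficients, geometry, and attenuation changes are placeholder values for illustration, not calibrated constants from any device.

import numpy as np

# Modified Beer-Lambert law at two wavelengths:
#   dA(lambda) = [e_HbO2(lambda) * d[HbO2] + e_HHb(lambda) * d[HHb]] * d * DPF
# Solving the 2x2 linear system recovers the concentration changes of
# oxygenated (HbO2) and deoxygenated (HHb) hemoglobin.

eps = np.array([[1.4, 3.8],   # wavelength 1: [e_HbO2, e_HHb] (placeholder)
                [2.5, 1.8]])  # wavelength 2: [e_HbO2, e_HHb] (placeholder)

d = 3.0    # source-detector separation, cm (assumed)
dpf = 4.0  # differential pathlength factor, tissue-dependent (assumed)

delta_a = np.array([0.010, 0.015])  # measured attenuation changes (illustrative)

# d_conc = [d[HbO2], d[HHb]] in arbitrary concentration units.
d_conc = np.linalg.solve(eps * d * dpf, delta_a)
print(f"dHbO2 = {d_conc[0]:+.5f}, dHHb = {d_conc[1]:+.5f}")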

According to Tim Noakes, the belief that oxygen delivery alone limits maximal exercise performance has straight-jacketed physiology: performance during both maximal and submaximal exercise has been explained exclusively in terms of oxygen transport, and local muscle intrinsic factors have largely been ignored [7]. When VO2max testing procedures were conceived, the ability to measure local muscle intrinsic factors was limited, but with the increased accessibility of NIRS devices, we can now measure in vivo oxidative metabolism, which has massive implications for exercise testing. For example, it was previously believed that the arteriovenous difference was nearly fixed, which leads one to assume that maximal cardiac output accounts for the vast majority of inter-individual differences in VO2max [11,16,19]. When this assertion was first made, it was not possible to measure the oxygen concentration of mixed venous blood; NIRS technology now makes an analogous measurement possible. In Figure 2, we see NIRS trends from two competitive CrossFit athletes performing a 30-second maximal sprint on an exercise bike. The athlete on the left is only capable of desaturating local muscle oxygen saturation down to 37%, while the athlete on the right can desaturate the working muscles down to 3% [23]. This suggests that the former athlete has an oxygen extraction limitation, which truncates their arteriovenous oxygen difference; improved oxygen extraction in this individual would likely increase VO2max by expanding that difference. The latter athlete, by contrast, cannot expand their arteriovenous oxygen difference further, so the vast majority of improvements in VO2max would come through increased maximal cardiac output [23] (Figure 2).


Conclusion

These findings, and many others, strongly suggest that there are many instances where factors other than maximal cardiac output can become the 'weak link' in the transport and utilization of oxygen and, subsequently, VO2max. I suspect that innovative coaching practices, both past and present, have already incorporated dimensions of what I've noted here into elite training systems. Importantly, however, such practices have been driven primarily by coaching intuition and experience; they sit outside conventional training theory and remain ignored in the endurance training literature. As a result, it is often said that science follows the best coaches by decades. This paper is my best attempt to pick up the breadcrumbs left by innovative coaches and present a theory that can be further tested and expanded upon in the future.

To read more about this article...Open access Journal of Anatomy & Physiology

Please follow the URL to access more information about this article

https://irispublishers.com/apoaj/fulltext/weak-beliefs-strongly-held-challenging-conventional-paradigms-of-maximal-exercise-performance.ID.000502.php



Monday, May 30, 2022

Iris Publishers-Open access Journal of Rheumatology & Arthritis Research | Corrected QT Interval in Systemic Sclerosis Patients

 



Authored by Alexandru Caraba*,

Abstract

Introduction: Cardiac involvement in patients with systemic sclerosis (SSc) represents an important cause of morbidity and mortality. Abnormal electrocardiographic findings are identified in 25-75% of SSc patients and are considered an independent predictor of mortality. In SSc, even without cardiac symptoms, the QT and corrected QT (QTc) intervals appear prolonged, which can lead to life-threatening tachyarrhythmias. The aim of this study was to assess the QTc interval in SSc patients and to assess the correlations between QTc and nailfold capillary findings in these patients.

Material and methods: This case-control study was performed on a group of 22 patients with SSc, who fulfilled the 2013 ACR/EULAR Classification Criteria for Systemic Sclerosis, and 22 healthy subjects, matched for age and gender, as controls. Twelve-lead standard electrocardiographic recordings and nailfold capillaroscopy were performed in all SSc patients and controls, and the QTc interval and nailfold capillary density were recorded. In SSc patients, antinuclear, anti-topoisomerase I, anti-centromere, and anti-RNA polymerase III antibodies were also determined. Data are presented as mean ± standard deviation. Statistical analyses were performed using Student's t-test, the ANOVA test, and Pearson's correlation. Differences were considered statistically significant at p < 0.05.

Results: The QTc interval was longer in the SSc group than in controls (p<0.01). QTc values increased with the severity of the nailfold capillaroscopic pattern, the differences being statistically significant (p<0.001). A statistically significant negative correlation was demonstrated between QTc values and nailfold capillary density, and this correlation grew stronger with increasing severity of the capillaroscopic pattern.

Conclusion: SSc patients present a prolonged QTc interval even when they have no cardiac symptoms, warranting ambulatory 24-hour ECG monitoring to identify ventricular arrhythmias and initiate appropriate therapy.

Keywords: Corrected QT interval; Nailfold capillaroscopy; Systemic sclerosis

Introduction

Systemic sclerosis (SSc) is a chronic disorder characterized by autoimmunity, inflammation, functional and then structural abnormalities of microvessels, and, finally, widespread interstitial and vascular fibrosis involving the skin and internal organs [1]. Based on clinical features and the presence of specific SSc-related autoantibodies, the following forms of SSc have been described: limited SSc (lcSSc), diffuse SSc (dcSSc), and SSc without skin involvement [2].

Cardiac involvement in SSc can be primary, a direct consequence of the disease, or secondary, associated with SSc pulmonary hypertension or renal crisis [3]. The clinical features of SSc heart involvement are highly variable, from silent forms to heart failure. For this reason, the reported prevalence of SSc cardiac involvement varies greatly, from 10% to 50%, depending on the diagnostic method used (clinical exam, electrocardiography, cardiac ultrasonography, cardiac magnetic resonance imaging) [4]. Rapid skin thickness progression is associated with greater cardiac involvement. Cardiac causes represent 20% to 36% of deaths associated with SSc. Several mechanisms are involved in SSc heart disease: microvascular alterations, myocardial inflammation, fibrosis, and autonomic dysfunction [1,2,5].

Abnormal electrocardiographic findings are identified in 25-75% of SSc patients and include atrial and ventricular tachyarrhythmias, conduction abnormalities, and bradyarrhythmias. They are considered an independent predictor of mortality [6]. In SSc, even without cardiac symptoms, the QT and corrected QT (QTc) intervals appear prolonged, which can lead to life-threatening tachyarrhythmias [7].

The QT interval, measured from the beginning of the QRS complex to the end of the T wave, represents the time required for all ventricular depolarization and repolarization processes to occur. It depends on many physiologic and pathologic factors, among which heart rate plays a major role. Several methods have been used to correct the QT interval, all of which account for heart rate, generating the corrected QT (QTc) [8].

The aim of this study was to assess the QTc interval in SSc patients and to assess the correlations between QTc and nailfold capillary findings in these patients.

Materials and Methods

Patients

This case-control study was performed on a group of 22 patients with SSc without cardiac symptoms and 22 healthy subjects, matched for age and gender, as controls. All patients fulfilled the 2013 ACR/EULAR Classification Criteria for Systemic Sclerosis [9]. Exclusion criteria were: overlap syndromes, overt cardiac diseases unrelated to SSc, tachyarrhythmias with a heart rate higher than 90 beats/minute, pre-existing bundle branch blocks, uncontrolled systemic hypertension, pulmonary hypertension, right ventricular dysfunction, diabetes mellitus, chronic kidney disease, current smoking, pregnancy or breastfeeding, and treatment with drugs that prolong the QT interval. All patients gave their informed consent. The study was approved by the Ethics Committee of the "Victor Babeș" University of Medicine and Pharmacy, Timişoara, Romania, and respects the Declaration of Helsinki.

Methods

Twelve-lead standard electrocardiographic recordings (recording speed 25 mm/sec, voltage 10 mm/mV) were performed in all patients and controls using BTL-08 SD3 equipment. The QT interval was recorded in all patients and controls; the QTc interval was then determined using Bazett's formula, QTc = QT/√RR, with the RR interval expressed in seconds. Normal values of the QTc interval were below 440 msec [8].
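To make the correction concrete, here is a minimal sketch of the Bazett calculation in Python; the QT and heart-rate values are illustrative, not patient data from this study.

import math

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    """Corrected QT by Bazett's formula: QTc = QT / sqrt(RR), RR in seconds."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_s)

# Illustrative example: QT of 400 msec at 75 beats/min (RR = 0.8 s).
qtc = qtc_bazett(400.0, 75.0)
print(f"QTc = {qtc:.0f} msec")  # ~447 msec, above the 440 msec cutoff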

The density of nailfold capillaries per millimeter was determined by nailfold capillaroscopy (USB Digital Microscope, 2.0 Megapixel Digital Camera). Before the procedure, patients and controls remained in a room with a stable temperature of 20-22°C for at least 15 minutes, to avoid capillary vasoconstriction, which can produce false-positive avascular areas. The 2nd, 3rd, 4th, and 5th fingers of both hands were examined. Giant capillaries, capillary hemorrhages, avascular areas, ramified/bushy capillaries, and capillary architecture were the recorded capillaroscopic parameters. Patients with SSc may develop three capillaroscopic patterns, defined as early, active, and late [10]. Nailfold capillary density per millimeter was the parameter used in the statistical analysis.

Antinuclear, anti-topoisomerase I, anti-centromere, and anti-RNA polymerase III antibodies were determined using indirect immunofluorescence (HELMED).

Statistical analysis

Data are presented as mean ± standard deviation. Statistical analyses were performed using Student's t-test, the ANOVA test, and Pearson's correlation. Differences were considered statistically significant at p<0.05.
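For readers who wish to reproduce this kind of analysis, the sketch below shows how the three tests can be run with the open-source SciPy library. The data are randomly generated stand-ins, not the study's measurements; only the group sizes mirror the study design.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for QTc (msec) in 22 patients and 22 controls.
qtc_ssc = rng.normal(450, 30, 22)
qtc_ctrl = rng.normal(410, 25, 22)

# Student's t-test: compare the two group means.
t, p_t = stats.ttest_ind(qtc_ssc, qtc_ctrl)

# One-way ANOVA: compare QTc across the three capillaroscopic patterns
# (6 early, 9 active, 7 late, mirroring the study's groups).
early, active, late = qtc_ssc[:6], qtc_ssc[6:15], qtc_ssc[15:]
f, p_f = stats.f_oneway(early, active, late)

# Pearson correlation: QTc vs nailfold capillary density (capillaries/mm).
density = rng.normal(7, 1.5, 22)
r, p_r = stats.pearsonr(qtc_ssc, density)

print(f"t-test p={p_t:.3f}; ANOVA p={p_f:.3f}; Pearson r={r:.2f} (p={p_r:.3f})")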

Results

Baseline demographic data of SSc patients and controls are presented in Table 1.

Table 1: Demographic data in SSc patients and controls.


Among the SSc patients, 19 had diffuse cutaneous SSc, whereas 3 patients had the limited form of the disease. Raynaud’s phenomenon was present in all cases.

Antinuclear antibodies were demonstrated in all patients. Anti-topoisomerase I antibodies were identified in 12 SSc patients, anti-RNA polymerase III antibodies in 7, and anti-centromere antibodies in 3.

The density of nailfold capillaries was lower in SSc patients than in controls (p<0.001). The QTc interval was longer in the SSc group than in controls, the difference being statistically significant (p<0.01) (Table 2).

Table 2: Nailfold capillary density and QTc interval in SSc patients and controls.


Table 3: QTc interval according to the nailfold capillaroscopic pattern.


Based on Cutolo’s capillaroscopic patterns [10], the studied SSc patients were classified as: early (6 patients), active (9 patients), and late (7 patients) capillaroscopic patterns. The data analysis revealed that the values of QTc interval increased with the severity of the capillaroscopic pattern, the differences having statistical significance (Table 3).

Statistical analysis highlighted statistically significant correlations between QTc values and nailfold capillary density, as well as with the mean length of SSc evolution (Table 4).

Table 4: Correlations between QTc values, nailfold capillary density, and disease duration.


The progression of SSc microangiopathy, as revealed by the nailfold capillaroscopic pattern, was associated with prolongation of the QTc interval. While at the beginning of SSc (early capillaroscopic pattern) no correlation existed between the QTc interval and nailfold capillary density, as the disease worsened (active and late capillaroscopic patterns) the negative correlations became more and more significant (Table 5).

Table 5: Correlations between QTc values and capillaroscopic patterns.


Regarding the QTc interval duration according to the SSc-related antibody profile, the presence of anti-RNA polymerase III antibodies was associated with the longest QTc interval (anti-centromere antibodies: 417.66±37.85 msec, anti-topoisomerase I antibodies: 449.91±40.97 msec, anti-RNA polymerase III antibodies: 480.71±43.43 msec, p<0.001). It should be noted, however, that 4 of the 7 patients with anti-RNA polymerase III antibodies had the late capillaroscopic pattern.

Discussion

The present study, performed on patients with SSc, showed that in these patients the QTc interval was longer than in controls. Moreover, there was a negative correlation between QTc values and nailfold capillary density. In other words, the progression of SSc microangiopathy, as revealed by the nailfold capillaroscopic pattern, was associated with QTc interval prolongation. While at the beginning of SSc (early capillaroscopic pattern) this correlation did not exist, as the disease worsened (active and late capillaroscopic patterns) the correlation became more and more significant (r = -0.6963, p<0.001 for patients with the active pattern and r = -0.8432, p<0.001 for patients with the late pattern).

SSc microangiopathy is present in all organs of these patients. Nailfold capillaroscopy has become a useful tool for staging microcirculation involvement in SSc, offering details about disease severity and the degree of vascularization in these patients [10,11]. Cutolo et al. defined three evolutionary patterns of microvascular involvement in SSc, named early (few giant capillaries, few capillary microhemorrhages, no evident loss of capillaries, and a relatively well-preserved capillary distribution), active (frequent giant capillaries, frequent capillary microhemorrhages, moderate loss of capillaries, absent or mildly ramified capillaries with slight disorganization of the capillary architecture), and late (irregular enlargement of the capillaries, almost absent giant capillaries and microhemorrhages, severe loss of capillaries with extensive avascular areas, ramified/bushy capillaries, and intense disorganization of the normal capillary array), which have a role in assessing the appearance and progression of sclerodermic microangiopathy [12,13].

The pathogenesis of SSc heart disease is related to cardiac microangiopathy (without epicardial vessel involvement), which drives ischemia, inflammation, and fibrosis [3]. Recurrent myocardial ischemia, induced by microvascular dysfunction (myocardial Raynaud's phenomenon), generates patchy myocardial fibrosis. This myocardial fibrosis is identified in 50% to 70% of SSc patients and prolongs the time required for all ventricular depolarization and repolarization processes to occur; the electrocardiographic sign is a prolonged QTc interval [14,15]. Besides myocardial fibrosis, autonomic dysfunction contributes to QTc prolongation, too [6].

Wei et al. showed that QTc interval prolongation represents an independent risk factor for cardiac mortality and sudden death [16]. Arrhythmias may be associated with poor outcomes and represent 6% of the overall causes of death in SSc patients [17]. According to Vacca et al., cardiac arrhythmias are identified in 25-75% of SSc patients [7].

A prolonged QTc interval has been identified in SSc patients even without clinical signs of myocardial involvement [18].

Studying 72 SSc patients and 64 controls, Morelli et al. showed that the SSc patients presented a significantly longer QTc interval than the controls (p = 0.0016) [19]. Thirty-eight patients with SSc (19 with dcSSc and 19 with lcSSc) and 17 healthy controls were studied by Sgreccia et al., who identified increased QTc interval, QT dispersion, and QTc dispersion in the SSc patients [20]. Bellando-Randone et al. found that SSc patients presented a prolonged QTc interval, a situation associated with an increased risk of sudden death [21]. In the study performed by Massie et al., a prolonged QTc interval was common in SSc patients and was associated with anti-RNA polymerase III antibodies, longer disease duration, and greater disease severity [22]. In the present study, the patients with anti-RNA polymerase III antibodies presented the longest QTc interval, but 57.14% of them had the late capillaroscopic pattern. Rosato et al., studying twenty SSc patients, showed that QTc intervals were significantly longer in SSc patients than in controls (447 msec vs 386 msec, p<0.0001). The authors revealed that QT intervals increased with the severity of the nailfold capillaroscopic pattern: early pattern, 425 msec (421-454); active pattern, 437 msec (416-467); and late pattern, 471 msec (445-566) (p<0.01). No correlations were found between QTc values and SSc subset or duration, but the presence of digital ulcers and a high modified Rodnan total skin score were correlated with the values of this interval [23]. Similar findings regarding QTc values and the nailfold capillaroscopic pattern were identified in the present study. One study showed that the QTc interval was significantly more prolonged in dcSSc patients than in lcSSc patients [24], but this has not been confirmed by other studies. Another study, performed on 65 SSc patients and 63 control subjects, showed that QTc intervals were significantly higher in the former (p < 0.01), without any difference between patients with dcSSc and lcSSc [15].

QTc interval prolongation in SSc patients with the late capillaroscopic pattern is associated with a high risk of developing life-threatening ventricular arrhythmias [23]. If a prolonged QTc interval is demonstrated, ambulatory 24-hour ECG monitoring is required to identify ventricular arrhythmias and initiate appropriate therapy.

The relatively small number of SSc patients is one of the limitations of this study. Large-scale prospective studies are required to identify the risk factors for life-threatening ventricular arrhythmias in SSc patients with a prolonged QTc interval.

Conclusion

SSc patients present a prolonged QTc interval even when they have no cardiac symptoms. In order to identify SSc heart disease from its earliest stages, it is advisable to perform an electrocardiogram once a year, especially in patients with significant nailfold capillary findings.

To read more about this article...Open access Journal of Rheumatology & Arthritis Research

