Fine Arts
Measuring shear wave speed in tendons using low-cost accelerometers on a flexible PCB with an Arduino microcontroller
Authors: Eli F Smith, Christopher Dillon, Matthew S Allen. Mentors: Matt Allen. Institution: Brigham Young University. Background: Shear wave tensiometry offers a method to measure in vivo tendon tension, crucial for inferring applied loads on tendons. However, existing equipment for this purpose is costly and lacks mobility, limiting the ability to study a larger cohort of subjects engaged in various physical activities. Goal: This research aimed to assess the viability of utilizing low-cost digital accelerometers in conjunction with an Arduino-based microcontroller for shear wave tensiometry. Approach: This work employs surface-mounted accelerometers on a custom flexible printed circuit board (PCB), so that even spacing can be maintained between the accelerometers without interfering with data collection. To test the system and verify its ability to acquire measurements at a high enough rate, the PCB was connected to a shaker driven with a known sinusoidal signal. The flexible PCB was then held in place on the Achilles tendon using athletic tape, with a tapper placed on the tendon to excite a shear wave. Results: The results obtained to date compare the accuracy of the proposed system relative to the current system, which uses instrument-grade accelerometers. Results obtained on the tendon reveal the degree to which the PCB interferes with the measurements and suggest possible modifications to improve future designs. Conclusion: To attain valid shear wave tensiometry data, further iterations of the flexible PCB design are needed. Moreover, employing more powerful microcontrollers capable of accommodating the requisite sample rates is necessary for an improved system.
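In shear wave tensiometry, the wave speed is typically estimated from the arrival-time delay between accelerometers a known distance apart. The sketch below is a simplified illustration of that calculation, not the authors' code; the sample rate, sensor spacing, and cross-correlation approach are assumptions made for the example.

```python
import numpy as np

def shear_wave_speed(acc1, acc2, fs, spacing_m):
    """Estimate shear wave speed from two accelerometer traces.

    acc1, acc2 : 1-D arrays sampled at fs (Hz) from accelerometers
                 separated by spacing_m (meters) along the tendon.
    """
    # Cross-correlate to find the lag (in samples) at which the downstream
    # trace best matches the upstream trace.
    corr = np.correlate(acc2 - acc2.mean(), acc1 - acc1.mean(), mode="full")
    lag_samples = np.argmax(corr) - (len(acc1) - 1)
    delay_s = lag_samples / fs
    return spacing_m / delay_s  # m/s

# Example with synthetic data: a pulse arriving 0.5 ms later at the second
# accelerometer, 10 mm away, sampled at 50 kHz -> about 20 m/s.
fs = 50_000
t = np.arange(0, 0.01, 1 / fs)
pulse = lambda t0: np.exp(-((t - t0) / 2e-4) ** 2)
print(shear_wave_speed(pulse(2e-3), pulse(2.5e-3), fs, 0.010))
```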
Victor LaValle’s Destroyer: Comic as a New Medium for Black Narratives
Authors: Shauri Thacker. Mentors: Nicole Dib. Institution: Southern Utah University. The representation of Black narratives within the medium of comics and graphic novels has been and continues to be sparse compared to the focus on white characters and stories. In recent years, however, a new paradigm of Black representation has been appearing in comics and comics studies. Emerging from Black Lives Matter movement discourse about police brutality and taking a new angle on the “organic black protest tradition” is Victor LaValle and Dietrich Smith’s 2017 graphic novel Destroyer. Their visual narrative follows the last descendant of Frankenstein—Dr. Josephine Baker—and her endeavors for justice after the murder of her son Akai as her plotline weaves with the original Frankenstein creature’s quest to subdue humanity (Rickford, 35). In this paper, I will perform a close reading of this work using the theoretical lens of critical race theory and its intersection with feminist theory. My reading will prove that LaValle and Smith’s comic—though it does not have the intent of treating Black women as a “unitary and monolithic entity”—increases a reader’s focus on mother and doctor Josephine Baker in order to portray her rage and grief over Akai’s death at the hands of a police officer (Nash, 8). This analysis of Dr. Baker’s character, combined with LaValle’s usage of allusions and intertextuality, subverts the stereotypically white comic narrative by portraying the lived reality and precarity of many Black individuals within the United States. Through this reading, I will demonstrate that the medium of comics allows for a multifaceted depiction of Black narratives and a new form of literary activism.
Theology and Prosperity of Women Within the Latter-Day Saint Church: Exploring Utah’s Religiosity Effects On Feminism
Authors: Alexis West Salinas. Mentors: Johnathan Chidester. Institution: Southern Utah University. There is a growing need to further understand the psychological and physical effects of religiosity among female-identifying members within the LDS church. According to recent studies, as of June 2023, Utah has a Mormon population of 68.55%. Within these statistics, about a fifth of LDS members reported that they have taken or are currently taking medication for depression. The article also states that 27% of LDS women have depression compared to 14.5% of LDS men. The church has an infamous reputation for the demonization of mental illness, especially among women. Given the history of society's treatment of women, women are at an increased risk of suffering within the LDS church. It is important to keep the best interests of LDS women in mind when discussing the effects religiosity has on female members within Utah. This paper will analyze well-being among women of the LDS church as well as assess the specific effects religiosity within Utah may produce surrounding the topics of purity culture, societal expectations, job opportunities, and relationships. This paper will also touch on topics relating to Utah education and LGBTQ+ women within the church, as well as compare and contrast gender roles and sociology. A clearer understanding of these relationships paves the way forward in implementing supportive tools for women within the LDS church in mental health and spirituality.
Reducing the Cytotoxicity of Polyethylene Glycol Diacrylate Microfluidic Devices Using an Isopropyl Alcohol Washing Method
Authors: Parker Johns, Chandler Warr, Gregory P Nordin, William G Pitt. Mentors: William G Pitt. Institution: Brigham Young University. Polyethylene glycol diacrylate (PEGDA) microfluidic devices have gained prominence in various biomedical and analytical applications due to their exceptional material properties and compatibility with cell culture systems. However, the presence of residual uncrosslinked PEGDA monomers and photoinitiators within these devices can lead to cytotoxicity concerns, potentially compromising cell viability and experimental results. In this study, we present an innovative approach to reduce cytotoxicity associated with PEGDA microfluidic devices by implementing an isopropanol (IPA) washing method. Our investigation involves thoroughly characterizing the cytotoxicity of untreated PEGDA microfluidic devices and comparing it with devices subjected to the IPA washing procedure. We systematically assess cytotoxicity using cell viability assays and cell proliferation studies to quantify the impact of residual cytotoxic compounds on cells cultured within the microfluidic channels. Our results demonstrate that IPA washing significantly reduces the cytotoxic effects of PEGDA microfluidic devices, leading to improved cell viability and overall biocompatibility. Furthermore, we elucidate the mechanisms behind the reduction in cytotoxicity, shedding light on the role of IPA in effectively removing unreacted PEGDA and photoinitiators. This study provides valuable insights into the optimization of PEGDA microfluidic device fabrication processes, enhancing their biocompatibility and usability for various biological and biomedical applications. In summary, our research highlights the importance of addressing cytotoxicity concerns associated with PEGDA microfluidic devices and offers a practical solution through the implementation of an IPA washing method, ultimately expanding the potential of these devices in diverse scientific and clinical applications.
Construction Techniques in Ancient Fremont Pithouses
Authors: Ellie Martin. Mentors: Mike Searcy. Institution: Brigham Young University. As part of the 2023 Hinckley Mounds excavation, BYU Field School participants excavated a sizable amount of beam-impressed adobe and burnt wooden beams from the partially excavated pit house. This research proposal will focus on these two types of artifacts and what can be learned about the Fremont people through their study. Specifically, in this research project I will attempt to answer the question of what specific types of wood and adobe were used, and how they were used together to build the Fremont pithouse. To do this, the charcoal will be sent in for wood testing to get dates and tree type. I will also test the beam-impressed adobe to understand the type of clay used to make the hardened adobe. Finally, I will study the beam impressions in the adobe to find the average diameter of the beams used in the pit houses, any outliers in the diameters, and analyze any visible angles on corner pieces to put together a picture of how the beams and adobe came together to form the skeleton of the pit house structure. This research has the potential to reveal much about the Fremont people who lived at Hinckley Mounds, and help us to better understand their living conditions. The research that I conduct will potentially generate data about the age and type of wood used in their living structures, the diameter of the wooden beams they used, and the angles of the corners of the structure. At the conclusion of this research we aim to gain a better understanding of how the Fremont people constructed their dwellings, from the materials to their methodologies. I anticipate finding that the trees they used were locally sourced and date to a similar period as the other dates that we have from the Hinckley Mounds site, between 700 and 1300 CE. The pithouse will likely prove to be sub-rectangular, similar to Structure 1 at the Hinckley Mounds site, although any angles that can be found in the adobe impressions could give more precise data. All of this new data may serve to provide more insight into how the Fremont lived and chose to operate in their society.
Trace Metal Concentrations of Various Land Use Types Surrounding Utah Lake
Authors: Alex Montgomery, Mason Gordon. Mentors: Eddy Cadet. Institution: Utah Valley University. Keywords: Trace metals, Soil, Land use, Utah Lake. Utah Lake has a history of anthropogenic impacts that have resulted in the accumulation of trace metals (TMs) in the sediments of this region. Previous studies have evaluated the water and saturated soils, but have not provided a complete picture of the human impact on upland soils. Some pollutants may be contained in unsaturated soils and never enter the water due to the inherent soil characteristics and chemical properties of the TMs. Elevated concentrations of TMs in the environment pose hazards to the ecosystem and local residents. These impacts can be better understood by evaluating TMs in unsaturated upland soils. In this study, a comparison of TMs in saturated wetland and unsaturated upland sediments, as related to anthropogenic sources, was completed. This study analyzed the types and concentrations of TMs to understand their mobility throughout the ecosystem. Fifty-two core samples were collected from the saturated and unsaturated soils at eight sites. These sites represent recently developed areas (New Dev), more established areas (Mid Dev, those developed for more than five years), recreational areas, mining areas, industrial areas, agricultural areas, and a wastewater treatment plant site. The soil samples were dried, ground, sieved, acid digested, and analyzed by ICP-OES for TM (As, Pb, Cu, Cr, Cd, and Zn) content. Preliminary results show that TM concentrations in upland sediments were higher than those in wetland sediments. Cr levels in industrial and Mid Dev areas are 43.0 ppm and 47 ppm, respectively. Saturated sediments at the same sites revealed Cr levels of 23 ppm and 21 ppm, respectively. This indicates that TMs are contained in unsaturated sediments. Of the observed land use types, Mid Dev is the most impacted, having the highest TM levels on average. Cd concentrations exceeded background levels in both New Dev (0.97 ppm) and Mid Dev (0.83 ppm). This is indicative of anthropogenic impact, as developed areas have higher TM content. The findings of this study will provide information to regulatory authorities in order to create policy that protects human health.
Another Look at Underlying Mortality Model Used in Life Insurance Industry
Authors: Benjamin Furniss, Britton Borget, John Sanders. Mentors: Patrick Ling. Institution: Utah Valley University. A mortality model is the underlying model used by life actuaries to price life policies, set reserve amounts, and compute policy values. A mortality model investigates how mortality rates evolve over time. Current insurance law in many states (including Utah) suggests the use of Scale AA (or a similar model) in projecting future mortality rates, which is a special case of an autoregressive time series model. This model is flawed because it is built on the assumptions that (1) there is no ARCH effect in the central death rates data, and (2) there is no unit root in the time series of the mortality index. These assumptions are questionable. It is no surprise that state insurance laws (including Utah's) have recently been revised in recognition of the discrepancy between model-predicted mortality rates and actual mortality rates. Recently published literature indicates that the second assumption is questionable, as some statistical tests suggest that there is a near unit root in the mortality model. In this talk we argue that an ARCH effect is present in the mortality data, so there is a need to adopt a time series model that incorporates heteroskedasticity in the mortality data. We will then propose a GARCH model for better prediction of future mortality rates – a key task for life actuaries, since it is important to anticipate what will happen over the next few decades of a policy term.
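As an illustration of the kind of diagnostic the talk argues for (not the authors' analysis, and using simulated data rather than real central death rates), Engle's LM test from statsmodels can flag an ARCH effect in the year-over-year changes of a mortality index:

```python
import numpy as np
from statsmodels.stats.diagnostic import het_arch

# Synthetic stand-in for a mortality index k_t (e.g., a Lee-Carter-style index):
# a downward drift plus noise whose variance shifts over time, so the series of
# annual changes should exhibit an ARCH effect.
rng = np.random.default_rng(0)
years = 70
sigma = np.where(np.arange(years) < 35, 0.5, 1.5)    # volatility regime shift
changes = -0.8 + sigma * rng.standard_normal(years)  # annual changes in k_t

# Engle's LM test for ARCH effects in the demeaned changes.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(changes - changes.mean(), nlags=4)
print(f"ARCH LM statistic = {lm_stat:.2f}, p-value = {lm_pvalue:.3f}")
# A small p-value suggests conditional heteroskedasticity, motivating a
# GARCH-type model rather than a constant-variance random walk with drift.
```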
Artificial black holes: are they a threat to humanity?
Authors: Tate Thomas. Mentors: Alexander M Panin. Institution: Utah Valley University. We wanted to see if accidentally creating mini black holes in high energy particle collisions posed a real threat to humanity. To do this, we calculated some properties of such a black hole, such as its life span, radius, density, and minimum energy required. We found that it is unlikely to exist, let alone destroy the planet. Furthermore, we calculated what would happen if it were to exist, finding that it would move through the Earth with little resistance and absorb only a small amount of Earth matter. Depending on its initial velocity, the black hole either quickly escapes Earth or settles into an orbit around it, part of which passes through Earth. It is interesting that in a simplified model of Earth as a sphere of uniform density, the inner part of the black hole's orbit is also elliptical (as the outer part is) but not Keplerian, with Earth's center not at a focus but at the center of the ellipse. In the case of a small initial velocity, when the entire orbit lies inside Earth, the period of this inner orbit is constant regardless of the black hole's birth location and initial velocity. The goal of this presentation is to discuss the results of our calculations and to explore potential applications to our understanding of the interaction of mini black holes with ordinary atomic matter.
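For scale, the standard Schwarzschild-radius and Hawking-evaporation formulas (textbook relations, not the authors' specific derivation) indicate why such a black hole would be vanishingly small and short-lived; the 14 TeV collision energy below is used only as a representative example:
\[
r_s = \frac{2GM}{c^2}, \qquad
t_{\mathrm{evap}} \approx \frac{5120\,\pi\, G^2 M^3}{\hbar c^4}.
\]
For a black hole carrying the full energy of a 14 TeV collision, $M = E/c^2 \approx 2.5\times10^{-23}$ kg, giving $r_s \approx 4\times10^{-50}$ m and an evaporation time of order $10^{-84}$ s, far too brief to accrete any surrounding matter.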
HEIGHTS IN THE abc CONJECTURE. AN UNDERGRADUATE APPROACH.
Authors: Brantson Yeaman. Mentors: Machiel van Frankenhuijsen. Institution: Utah Valley University. There has been considerable curiosity at the graduate and postgraduate level with regard to heights, that is, heights in their relation to Diophantine geometry. One application of heights is in the $abc$ conjecture, which remains highly mysterious. Often, the only height undergraduates encounter is the traditional absolute value. This talk seeks to define the height for use in investigating the $abc$ conjecture and present it at a level that undergraduates with little experience in number theory can approach. It will introduce the idea of a $p$-adic norm of a number, a projective point, and a view that lends itself to both a simple idea of distance, and yet has an analogue in the Hamiltonian numbers.
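For reference, the standard definitions the talk builds on can be stated compactly (these are textbook formulations, not the speaker's notation):
\[
|x|_p = p^{-v_p(x)} \ \text{ for } x \in \mathbb{Q}^{\times}, \qquad
\prod_{v} |x|_v = 1 \ \text{ (product formula)},
\]
\[
H\!\left(\tfrac{a}{b}\right) = \max(|a|,|b|) \ \text{ for } \gcd(a,b)=1,
\]
which is the height of the projective point $[a:b] \in \mathbb{P}^1(\mathbb{Q})$. In this language the $abc$ conjecture asserts that for coprime integers with $a+b=c$,
\[
\max(|a|,|b|,|c|) \le C_{\varepsilon}\,\operatorname{rad}(abc)^{1+\varepsilon}
\]
for every $\varepsilon > 0$.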
Asexual Aviators: Transcriptome Profile of the Life Stages of a Parthenogenic Mayfly
Authors: Avery Larsen, Heath Ogden. Mentors: Heath Ogden. Institution: Utah Valley University. Mayflies, also known as Ephemeroptera, are members of the anciently derived infraclass known as Paleoptera, the first group of insects that evolved the ability to fly (Gillott, 2005). Distinct developmental characteristics of the life stages of the mayfly are the reason that Ephemeroptera are of particular interest. The life stages are the egg, nymph, subimago, and imago. The aquatic nymph stage does not have wings but instead has leaf-shaped gills that can be used to help propel the insect through its environment (Eastham, 1936). Centroptilum triangulifer will be used for RNA extractions to study the development of both wings and gills. Objectives for this research are to: 1) elucidate, describe, capture, and record distinguishing characteristics of the different instars of Centroptilum triangulifer; 2) identify key instars integral to gill and wing development in Centroptilum triangulifer; 3) perform 10 RNA extractions; 4) use Qubit technology to ensure ≥ 20 ng/μL RNA concentration per ≥ 10 μL; 5) package samples in thermo-stable shipping boxes and send them to Novogene, where samples will be tested once more for quality control; 6) once the quality is confirmed, process samples using poly-A enrichment and then sequence them using high-throughput Illumina sequencing, with data tested for quality control and then sent to the Ogden lab; 7) using an in-house bioinformatics workflow, check the RNA data for quality, trim it, and align it to the Ogden lab transcriptome reference, after which the number of reads per gene, or hit counts, are calculated and compared; and 8) compare RNA results of different instars to each other as well as to NCBI databases using the Basic Local Alignment Search Tool.
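After alignment, comparing hit counts across instars usually starts with a depth normalization and a fold-change calculation. The sketch below is a generic illustration with invented gene names and counts, not the Ogden lab pipeline:

```python
import numpy as np
import pandas as pd

# Hypothetical hit counts for a few genes in two instars (illustrative only).
counts = pd.DataFrame(
    {"late_nymph": [1500, 30, 800, 10],
     "subimago":   [1400, 450, 120, 12]},
    index=["actin", "wing_dev_candidate", "gill_candidate", "low_expr"],
)

# Counts-per-million normalization removes differences in sequencing depth.
cpm = counts / counts.sum(axis=0) * 1e6

# Log2 fold change between instars (a pseudocount avoids division by zero).
log2_fc = np.log2((cpm["subimago"] + 1) / (cpm["late_nymph"] + 1))
print(log2_fc.sort_values(ascending=False))
```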
Exploring the Differences Among Attachment Styles with Cognitive Appraisal and Emotional Suppression
Authors: Rebekah Hakala, Moroni Black. Mentors: Todd Spencer. Institution: Utah Valley University. Over the years countless studies have delved into the theory of attachment due to its influence in the field of child development. Attachment theory is the idea that the sensitivity of parents or caregivers to a child's bids for attention affects how the child will bond in relationships (Bretherton, 1992; Wilson-Ali et al., 2019). The style of attachment that a child has can affect them long-term and may influence other relational and developmental characteristics (Kurth, 2013). Due to the influence of attachment, our study investigates the relationship it has with cognitive reappraisal. Cognitive reappraisal is the ability to regulate one's emotional states and the idea that a change in these thoughts is necessary to change negative emotions (Troy et al., 2017; Clark, 2022). With its ability to process and regulate emotion, cognitive reappraisal is another influential characteristic of the human psyche. Our results and analysis of the relationship between attachment theory and cognitive reappraisal come from a quantitative survey. The purpose of the present study is to examine the relationship among attachment styles and cognitive reappraisal. Our sample consists of 411 married individuals. Participants completed the Relationship Questionnaire (RQ; Bartholomew & Horowitz, 1991) and the Emotion Regulation Questionnaire (ERQ; Gross & John, 2003). Results of the one-way ANOVA were significant, F(3, 407) = 3.36, p < .01: there was a significant difference in levels of cognitive reappraisal among attachment styles. Mean cognitive reappraisal scores were: secure attachment (M = 29.95, SD = 6.41), fearful attachment (M = 27.27, SD = 5.92), preoccupied (M = 27.99, SD = 7.27), and dismissing (M = 28.78, SD = 7.54). Results of the Bonferroni post-hoc analysis indicated that securely attached individuals reported significantly higher levels of cognitive reappraisal than those with other attachment styles (p < .001). Results provide empirical support that secure attachment styles tend to be beneficial for cognitive reappraisal.
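For readers unfamiliar with the analysis, a one-way ANOVA of this design can be sketched as follows; the scores below are simulated (loosely echoing the reported group means) and are not the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical cognitive-reappraisal scores for four attachment groups
# (the group sizes and values are invented for illustration).
secure      = rng.normal(30.0, 6.4, 120)
fearful     = rng.normal(27.3, 5.9, 95)
preoccupied = rng.normal(28.0, 7.3, 100)
dismissing  = rng.normal(28.8, 7.5, 96)

f_stat, p_value = stats.f_oneway(secure, fearful, preoccupied, dismissing)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A significant F would then be followed by pairwise comparisons with a
# Bonferroni correction, e.g. scipy.stats.ttest_ind on each pair with
# alpha divided by the number of comparisons.
```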
MRSA-induced biofilm clearance by bacteriophage and antibiotic.
Authors: Rainey Hughes, Avalon Marker, Elizabeth Bouwhuis, Yeshaswini Dudde, Bryan Dopp, Scot Carington, Jared Nelson. Mentors: Daniel Clark. Institution: Weber State University. Antibiotic resistance is a pressing concern within the medical community as bacteria's resistance to antibiotics is escalating alongside the increased usage of antibiotics. According to the CDC, there are close to 2.8 million antibiotic-resistant infections every year, with about 35,000 of them resulting in death. This issue has prompted antibiotic stewardship programs in clinics and hospitals to avoid adding to the list of resistant bacteria. Staphylococcus aureus, including the formidable methicillin-resistant S. aureus (MRSA) strain, poses a grave threat due to its antibiotic resistance. The challenges stemming from this resistance become even more formidable when these infecting bacteria assemble into biofilms. Biofilms are robust, adhesive layers composed of bacteria and their extracellular matrices of polysaccharides, proteins, and DNA. In clinical environments like hospitals, biofilms frequently develop on medical devices such as stents, catheters, and IV lines, as well as on metal and plastic surfaces of medical equipment. These biofilms complicate antibiotic treatment because eradication is incomplete; the most resilient bacteria persist after exposure. There is evidence indicating that bacteriophages, which are viruses that target a particular species or strain of bacteria, have the ability to encode depolymerases. These depolymerases can identify biofilms, adhere to them, and subsequently break down extracellular polymeric substances. Furthermore, bacteriophages can produce lysins, which induce bacterial cell death through cellular lysis. These characteristics can potentially render the bacteria more susceptible to antibiotics. The use of bacteriophages can also be beneficial when it comes to the concern of opportunistic infections. Due to their selectivity for specific bacteria, they can attack the target hosts and leave the natural flora intact. In our research, we have induced biofilms in our bioreactor. With these biofilms we have been able to test different concentrations of multiple antibiotics, including Vancomycin, Oxacillin, and Carbenicillin, in combination with phage K at different concentrations. Our research is aimed at showing a synergistic relationship between phage K and antibiotics that will allow a subinhibitory concentration of both, in combination, to induce a complete kill and clearance. We have measured this by evaluating bacterial growth via absorbance measurements at 600 nm in a Tecan plate reader. We have also measured biofilm clearance using the plate reader and measuring fluorescence at 630 nm with a biofilm tablet assay. It was found that a subinhibitory concentration of antibiotic alone did not induce a complete kill and clearing, and that a subinhibitory concentration of phage alone did not induce a complete kill and clearing. However, once these concentrations were used in combination with each other, the complete clearing and killing of MRSA was achieved, and furthermore, was achieved with an antibiotic to which the S. aureus strain is resistant, leading us to believe that we have found a renewed use for a currently ineffective defense mechanism.
Development of the midwestern blot technique for elucidation of pigment-binding proteins
Authors: Tessa C Black, Craig D Thulin. Mentors: Craig Thulin. Institution: Utah Valley University. Monarch butterflies (Danaus plexippus) are a distinctive and beloved species due to their unique wing coloration. In 2022, UVU student Kyri Forman and Dr. Craig Thulin identified seven pigments in monarch butterfly wings, three of which have not yet been identified in any other organism. The novel pigments are modified versions of xanthommatin, and their discovery implies the existence of presently unknown enzymes which catalyze the pigments' biosynthesis. The western blot and its modified technique, the far western blot, are useful tools for identifying protein-protein interactions using antibodies and bait proteins, respectively. To identify xanthommatin-binding proteins present in monarch butterfly pupae, we are developing a new blotting technique which we are calling the midwestern blot. This technique uses pigment molecules to identify the presence of pigment-binding proteins within a mixture. The midwestern blot technique is being validated using cytochrome C, hemoglobin, and myoglobin, three heme-binding proteins of known molecular weight and structure. Once validated, we will use the midwestern blot to identify xanthommatin-binding proteins extracted from monarch butterfly pupae. The midwestern blot will help promote future investigations into pigment-binding proteins, including the enzyme responsible for the novel pigments identified in monarch butterfly wings.
Efficiently and Accurately Simulating Coupled Nonlinear Schrödinger Equations with Exponential Time Differencing and Fourier Spectral Methods
Authors: Nate Lovett. Mentors: Harish Bhatt. Institution: Utah Valley University. Coupled nonlinear Schrödinger equations (CNLSEs) are an extension of the nonlinear Schrödinger equation (NLSE) that applies to multiple interacting wave systems. They occur naturally in many physical systems, including nonlinear optics, multi-component Bose-Einstein condensates, and shallow water waves. Solitons, which are self-contained, localized wave packets that preserve their shape and speed during propagation, are a significant application of CNLSEs. Solitons are prevalent in nonlinear systems and play a critical role in long-distance information transmission in telecommunications. Despite their widespread use in various fields, solving CNLSEs analytically is challenging, and numerical approximations are necessary. However, solving CNLSEs numerically is a difficult task because of their high nonlinearity. To overcome this challenge, in this presentation we will introduce, analyze, and implement an established fourth-order Exponential Time Differencing scheme in combination with the Fourier spectral method for simulating one-dimensional CNLSEs. In order to check the performance of this method in terms of accuracy, efficiency, and stability, we will present simulation results on CNLSEs. Our results will consider single-, two-, and four-soliton interactions for homogeneous Neumann, homogeneous Dirichlet, and periodic boundary conditions. The numerical results will show that the proposed method is able to preserve energy and mass over long-time simulations of soliton interactions, as well as maintain the expected order of convergence.
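To make the numerical setting concrete, the sketch below propagates the standard bright soliton of the single (uncoupled) NLSE with a second-order split-step Fourier method under periodic boundary conditions. It is only an illustration of the Fourier spectral treatment of the linear term, not the fourth-order exponential time differencing scheme analyzed in the presentation:

```python
import numpy as np

# Minimal split-step Fourier solver for i u_t + 0.5 u_xx + |u|^2 u = 0
# with periodic boundaries.
N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

dt, steps = 1e-3, 5000
u = 1 / np.cosh(x)              # bright soliton initial condition
half_nl = lambda v: v * np.exp(1j * np.abs(v) ** 2 * dt / 2)
lin = np.exp(-1j * k ** 2 * dt / 2)

for _ in range(steps):
    u = half_nl(u)                        # nonlinear half step
    u = np.fft.ifft(lin * np.fft.fft(u))  # linear full step in Fourier space
    u = half_nl(u)                        # nonlinear half step

# The soliton should keep its shape: peak amplitude stays near 1 and the
# discrete mass (L2 norm squared) stays near 2 over the whole run.
print(np.max(np.abs(u)), np.sum(np.abs(u) ** 2) * (L / N))
```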
Power Scaling a Nd:YVO4 Laser for Use as the Optical Drive of a Long Wave Infrared Optical Parametric Oscillator
Authors: Alexander Gibb. Mentors: York Young. Institution: Utah Valley University. Our research group's long-term goal is to build a student-constructed nonlinear optical system to create a tunable source of longwave infrared (LWIR) photons for probing molecules for early detection of disease in human tissue. I previously developed a diode laser system as the optical pump for a 5 Watt Nd:YVO4 laser, which was reported at UCUR in February 2023. However, recent nonlinear optical calculations for our optical parametric oscillator (OPO) show that we will need the average laser power from that Nd:YVO4 laser to be in the 7-8 Watt range. My work to power scale the Nd:YVO4 laser is described in this presentation.
A Gyroscopic System for Magnified Coherent Diffraction Imaging
Authors: Tyler Daynes, Jair Gonzalez, Jeremy Tait, Josh Jumper, Ellie Purcell, Tyler O'Loughlin, Vern Hart. Mentors: Vern Hart. Institution: Utah Valley University. Coherent diffractive imaging (CDI) is a common method for resolving small objects such as cells in order to determine their morphology. In essence, it takes the diffraction pattern of laser light passing through an object and computationally reconstructs it back into an image of the object. On the downside, the resolution of these diffraction patterns, limited by CCD size and sensitivity, can be poor, resulting in a less-than-optimal reconstruction. We have set out to build a system that can magnify this diffraction pattern without distorting the original image of the pattern. We have done this by building our own rastering system that can magnify light on the far edges of our beam. It does this by taking images with a lens attached to the rastering camera. This also has a problem: when the camera moves, the detected light quickly falls off the screen of the detector. To fix this, we added a gyroscopic system to our camera and lens so that the incident light can hit the system azimuthally at every point on the raster. This in turn will provide a higher-quality diffraction pattern for reconstruction in CDI.
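Reconstruction in CDI is typically done with iterative phase retrieval. The sketch below is a generic error-reduction (alternating-projection) loop on a synthetic object, included only to illustrate that computational step; it is not the group's reconstruction code, and the object, support, and iteration count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic object confined to a known support region.
n = 128
obj = np.zeros((n, n))
obj[48:80, 40:88] = rng.random((32, 48))
support = obj > 0

# Measured quantity in CDI: the magnitude of the far-field diffraction pattern.
measured_mag = np.abs(np.fft.fft2(obj))

# Error-reduction iterations: enforce the measured Fourier magnitude,
# then enforce the real-space support and non-negativity constraints.
g = rng.random((n, n))
for _ in range(500):
    G = np.fft.fft2(g)
    G = measured_mag * np.exp(1j * np.angle(G))
    g = np.real(np.fft.ifft2(G))
    g = np.where(support, np.clip(g, 0, None), 0.0)

print("relative error:", np.linalg.norm(g - obj) / np.linalg.norm(obj))
```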
Using Ecological Niche Modeling to better understand Pediomelum aromaticum, a threatened Utah plant
Authors: Ian Eggleston, Ashley N Egan. Mentors: Ashley N Egan. Institution: Utah Valley University. Ecological Niche Modeling (ENM) is a very useful technique that gives us insight into a species' present and possible future ranges, habitats, and niches. ENM has applications within conservation biology, as models can be used to understand the extent to which climate change may impact a species. Additionally, ENM can be useful for prospecting for and propagation of rare plant species. This project will use ENM to create predictive range models for a rare plant species, Pediomelum aromaticum, with the goal of defining an ecological niche, determining the impact of climate change, and supporting general conservation of P. aromaticum. Here, we will compare 19 bioclimatic variables using correlation analysis and ecological niche modeling to determine which are the most impactful on the range of P. aromaticum. Additionally, ENMs will be created using the MAXENT algorithm from historical and predicted future climate data. By comparing these models, we can hypothesize as to how climate change may impact P. aromaticum. Finally, models will be compared across predicted climate futures defined by shared socio-economic pathways (SSPs), which represent different trajectories of human development, governance, and global behavior and their consequences for global climate change. Different ENM models will be created with different SSP climate scenarios so that we can understand how actions taken by humanity right now may impact the critically imperiled species P. aromaticum.
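A common preprocessing step before fitting a MAXENT model is to screen the 19 bioclimatic variables for collinearity and drop one member of each highly correlated pair. The sketch below illustrates that filter with random stand-in data; the 0.8 threshold and the values themselves are assumptions, not numbers from this project:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Stand-in for the 19 WorldClim bioclimatic variables sampled at occurrence
# points (values here are random; real values would come from raster layers).
bio = pd.DataFrame(rng.normal(size=(200, 19)),
                   columns=[f"bio{i}" for i in range(1, 20)])
bio["bio12"] = bio["bio1"] * 0.9 + rng.normal(scale=0.3, size=200)  # force one strong correlation

# Drop one variable from every pair whose |Pearson r| exceeds a threshold,
# a common way to reduce collinearity before MAXENT modeling.
corr = bio.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.8).any()]
print("dropping:", to_drop)
retained = bio.drop(columns=to_drop)
```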
Effect of a supernova explosion on the mechanical and thermal stability of orbiting planets
Authors: Alexander Panin, Benjamin Miera. Mentors: Alexander Panin. Institution: Utah Valley University. Recent searches for extrasolar planets have brought a surprising discovery – almost any star seems to have a planetary system around it. We know that massive stars end their lives in a violent supernova explosion, during which an extremely large amount of energy (~3x10^46 J) is released from the star in a very short time. In this presentation, we analyze the effect of this explosion – primarily the impact of the neutrino flash, the gamma ray flash, and the expanding plasma shell – on the mechanical and thermal stability of an orbiting planet. Our calculations show that a planet's orbit can be significantly disrupted by the momentum of the exploding star shell (depending on the planet's mass and proximity to the host star), but the radiation pressure from the explosion has a much weaker effect. If a star loses too much mass to the expanding shell, any previously stable orbit will become unstable, causing the planets to escape. Additionally, we found that the gamma ray flash and the plasma shell incident on the planet can cause significant heating, while the neutrino flash would have virtually no effect. Also, if a star's collapse is asymmetric, then the star itself can leave the planetary system due to the momentum of the asymmetric neutrino radiation. The sequence of events during a supernova explosion and how they influence such a planet are discussed in the presentation.
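The orbital-stability claim can be seen with a standard back-of-the-envelope argument (a textbook result, not the authors' full calculation): for a planet on an initially circular orbit of radius $r$, if the mass loss is effectively instantaneous compared with the orbital period, the planet keeps its circular speed $v^2 = G M_{\text{old}}/r$ while the attracting mass drops to $M_{\text{new}}$, so its new specific orbital energy is
\[
\varepsilon = \frac{v^2}{2} - \frac{G M_{\text{new}}}{r}
            = \frac{G M_{\text{old}}}{2r} - \frac{G M_{\text{new}}}{r} \ge 0
\quad\Longleftrightarrow\quad
M_{\text{new}} \le \tfrac{1}{2} M_{\text{old}},
\]
so ejecting more than half of the system's mass unbinds previously circular orbits.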
Transdermal antiseptic products as a method to decrease skin bioburden prior to surgery
Authors: Kiersten Gardner, Hannah Duffy, Abbey Blair, Nicholas Ashton, Porter Stulce, Dustin Williams. Mentors: Dustin Williams. Institution: University of Utah. Kiersten Gardner(1,2), Hannah Duffy(1,2), Abbey Blair(2), Nicholas Ashton(2), Porter Stulce(2), Dustin Williams(1,2,3,4); 1 Department of Biomedical Engineering, University of Utah, Salt Lake City, UT; 2 Department of Orthopaedics, University of Utah, Salt Lake City, UT; 3 Department of Pathology, University of Utah, Salt Lake City, UT; 4 Department of Physical Medicine and Rehabilitation, Uniformed Services University, Bethesda, MD. Introduction: Preoperative skin preparation (PSP) kits are used before surgery to prevent surgical site infection (SSI). These kits consist of alternating scrubs of alcohol and chlorhexidine gluconate (CHG) or povidone iodine (PVP-I). Transdermal antiseptic products like Ioban, Tegaderm, and Surgiclear are also used clinically to eradicate skin bacteria by releasing antiseptic over time. Despite these precautions, infections often occur, usually stemming from a patient's endogenous skin flora. The bacteria causing SSI reside deep in dermal sweat glands and hair follicles, untouched by traditional PSP. To eliminate these bacteria, antiseptics must diffuse deeper into the skin at concentrations above the minimum inhibitory concentration (MIC). We screened FDA-approved topical antiseptic products using a modified Kirby-Bauer assay to assess bacterial kill over time. We hypothesized that extended use of topical antiseptic products prior to surgery would kill deep-dwelling skin bacteria. We tested the extended use of these products on pig backs in conjunction with a CHG PSP to evaluate remaining bioburden. Methods: We determined the minimum inhibitory concentrations (MIC) for CHG and PVP-I against three common skin bacterial strains. We then took 6 mm biopsy punches of Tegaderm, Surgiclear, and Ioban and placed them on fresh bacterial lawns daily for 3 days. We measured the zones of inhibition (ZOI). Lastly, we applied the products to the backs of 4 Yorkshire pigs (n=4 products per test). After 48 h, we removed the products, performed a CHG PSP scrub, and excised skin sections underneath the products. We homogenized the skin and quantified colony forming units (CFU)/g tissue. We analyzed the outcomes statistically using a mixed effects linear regression to determine significance. Results: The average MIC values for CHG and PVP-I were 2 and 2,000 μg/mL for Staphylococcus aureus, respectively. The average 24 h ZOIs for S. aureus were 7.9, 22.4, and 10.1 mm for Ioban, Tegaderm, and Surgiclear, respectively. The average log10 reductions for the CHG PSP, Ioban, Tegaderm, and Surgiclear were 1.87 ± 0.232, 1.65 ± 0.192, 1.69 ± 0.222, and 1.77 ± 0.316 CFU/cm2, respectively. The p values between the CHG PSP and Ioban, Tegaderm, and Surgiclear were 0.491, 0.572, and 0.746, respectively. Discussion: The benchtop data indicated adequate antiseptic diffusion and kill. In the porcine model, however, the presence of any of the products resulted in a statistically insignificant difference in log reduction. Extended use of FDA-approved transdermal antiseptic products does not appear to decrease the skin's bioburden. It appears that the products are not releasing high enough concentrations of the antiseptic to kill bacteria.
Unlocking the Mysteries of Glacial Watersheds: Tracing the Path of Water Chemistry Over Time and Space
Authors: Miaja Coombs, Greg Carling. Mentors: Greg Carling. Institution: Brigham Young University. Our research delves into the intricate relationship between glaciers and Alaskan rivers. We embarked on an extensive study across Southcentral and Interior Alaska, spanning various mountain ranges, to examine how glaciers affect the water chemistry in these regions. From small cirque glaciers to expansive valley glaciers and sediment-covered glaciers, we collected data from river sites extending from glacier termini to the ocean or larger river systems. Over a span of two years, our weekly and monthly samples shed light on the complex interplay of elements, isotopes, and seasonal variations in water sources within these proglacial rivers and streams. Our findings reveal the dynamic nature of glacier-influenced watersheds, especially in the context of a changing climate.
Ancestral Puebloan Ceramics Technology and Vessel Properties at Alkali Ridge Site 13
Authors: Carolina Corrales. Mentors: James R. Allison. Institution: Brigham Young University. This research analyzes ceramics found at the Alkali Ridge Site 13 in southeast Utah. The information generated with this research will allow us to know more about the technological choices of the Early Pueblo I people who lived at this location in the late A.D. 700s. The methodology will examine rim sherds through refiring and porosity tests. Refiring small sections of the sherds will provide initial information about the chemical composition of the clay used to create the vessels. The porosity tests should help determine the pieces' physical properties linked to different technologies. The combination of all these data will show differences in raw materials and their impact on the constitution of the vessels. The database obtained will allow us to statistically compare information from red, grey, and white wares, identifying patterns in size, shape, kind of material, and the technology used for each type.
Re-Membering and the Role of Community In Exorcism In Toni Morrison’s Beloved
Authors: Jen Hansen. Mentors: Nicole Dib. Institution: Southern Utah University. Toni Morrison’s contribution to Gothic horror with Beloved expands the tradition into the history of human enslavement, specifically in the context of American history. She reconceptualized ‘remembering’ as both the conscious awareness of the past and the literal reassembling of members of the body, and by extension the family and the home. Morrison coined the term “rememory” in reference to the intentional act of recollection performed by an entire community. The characters of the novel are haunted in several ways and each haunting is only exorcized or overcome through acts of communion, or rememory — many of which are symbolically religious even if not sanctioned as such. The main character is a woman named Sethe, who is haunted by the ghost of the daughter she murdered rather than allow to be taken as a slave. The ghost of Beloved represents the return of the repressed trauma of her death and the connection to Sethe’s previous life in captivity. In order to exorcize Beloved’s ghost and free themselves from her oppressive presence, Sethe and her living daughter, Denver, must re-member the broken family structure within their home, and rememory the traumatic past with the support of the community in order to heal. American Gothic traditions in literature have long been used to reflect on anxiety, discrimination, and disempowerment related to the Other. In this novel, Morrison uses that tradition to give shape to the culturally specific legacy of slavery in America. In this presentation I examine the novel’s Gothic elements and the ways the community plays the part of the exorcist as an essential advocate for the physical and emotional survival of Sethe and Denver.
Quantifying Approximation Errors in the Flash Thermal Diffusivity Measurement Technique Using High Fidelity Simulations
Authors: Tage T Burnett, Jakob G Bates, Christopher R Dillon, Matthew R Jones. Mentors: Christopher R Dillon. Institution: Brigham Young University. In recent years, modeling and simulation have become more prominent in solving heat transfer problems. The accuracy and predictive power of heat transfer simulations is limited by the quality of the thermal properties used within the model. Thus, one method for improving computational accuracy is measuring thermal properties more precisely. Additionally, increased precision of thermal properties benefits other aspects of engineering including design and analysis. This research focuses on quantifying approximation errors in the widely adopted flash method for measuring thermal diffusivity. The flash method leverages several approximations to make it simple and easy to use; however, these approximations do not reflect reality and introduce measurement errors. Understanding these errors is critical for developing high-precision thermal diffusivity measurement techniques. In the flash method, the top surface of a small, cylindrical disc of material is subjected to a short pulse from a laser or flash lamp and the time-dependent temperature at the opposite surface is recorded. The thermal diffusivity is calculated using those temperature measurements in combination with a mathematical model. The accuracy of the flash method depends upon the accuracy of the mathematical model. One common mathematical model is the Parker Model. This model assumes that all of the energy from the pulse is deposited in an infinitesimally thin layer at the surface of the material and negligible heat is lost to the environment. These assumptions simplify the model, making it easy to use, but introduce errors into the measured thermal diffusivity. Computational methods can quantify these inaccuracies. Factors including heat lost to the environment, the temporal profile of the laser pulse, and the spatial distribution of the deposited energy can be incorporated into heat transfer simulations. Higher fidelity mathematical models can also be developed to account for these complexities. This project includes these and other factors to make simulations as realistic as possible. Various mathematical models, such as the Parker Model and higher fidelity models, are then used to calculate the thermal diffusivity from the resulting time-dependent temperature profiles and their measurements are compared to the simulated material's true thermal diffusivity. Repeating this process for several material types will allow the precision of the models to be analyzed for each case. This analysis will be summarized at the conclusion of this project, providing a framework for developing more precise thermal diffusivity measurement techniques.
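For context, the Parker Model reduces the measurement to a single formula: with sample thickness L and half-rise time t_1/2 of the rear-face temperature, alpha = 1.38 L^2 / (pi^2 t_1/2), or roughly 0.1388 L^2 / t_1/2. The sketch below applies it to a synthetic rear-face curve generated from the ideal adiabatic series solution; it illustrates only the baseline model, not the project's higher-fidelity simulations:

```python
import numpy as np

def parker_diffusivity(t, T_rear, thickness_m):
    """Thermal diffusivity (m^2/s) from a rear-face temperature rise curve
    via the classic Parker half-rise-time formula:
        alpha = 1.38 * L**2 / (pi**2 * t_half)
    This assumes an instantaneous surface pulse and no heat loss -- the
    same idealizations whose errors the study quantifies."""
    rise = T_rear - T_rear[0]
    t_half = t[np.argmax(rise >= 0.5 * rise.max())]  # first time at half-rise
    return 1.38 * thickness_m**2 / (np.pi**2 * t_half)

# Check against the ideal adiabatic series solution for a 2 mm sample with a
# true diffusivity of 1e-6 m^2/s; the estimate should recover ~1e-6.
L, alpha_true = 2e-3, 1e-6
t = np.linspace(1e-3, 5.0, 4000)
n = np.arange(1, 200)[:, None]
T = 1 + 2 * np.sum((-1.0) ** n * np.exp(-(n * np.pi) ** 2 * alpha_true * t / L**2), axis=0)
print(parker_diffusivity(t, T, L))
```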
Subtle Force Communication for Intuitive Human-Robot Co-manipulation
Authors: Samuel Charles. Mentors: Marc Killpack. Institution: Brigham Young University. Robots have an incredible potential to help humans in extreme or dangerous situations due to their significant and consistent durability, strength, endurance, replaceability, etc. However, humans and robots move very differently, leading to difficulties working intuitively with a robot partner when completing a task such as lifting a heavy object. We recently conducted studies in which human subjects moved a 60-lb table to several different positions in a room; we recorded force and torque data, along with many other aspects of the movement. In these studies of human-human co-manipulation, we noticed a trend during particularly difficult maneuvers; when lifting the table to high positions or acute angles, subjects switched their hand holds on the table’s handles. This was likely an easier method of holding the table, but it may have also communicated placement, stability, understanding, and strength to the other partner, leading to a smoother and more intuitive movement and experience overall. If this is the case, this data could be used to help a co-manipulation robot both effectively understand the subtle commands in human movement and intuitively communicate needed movement to the human partner. This is particularly useful in emergencies like natural disaster sites and war zones, in which immediate help is needed, but there is no time to troubleshoot an unclear or unintuitive robot.
Ransomware Resilience and Ethical Dilemmas: A Comprehensive Review of Threat Landscape, Impact, and Mitigation Strategies
Authors: Asmaa Alsharif. Mentors: Sayeed Sajal. Institution: Utah Valley University. Ransomware is malicious software which restricts users from accessing their computer and personal data through encryption. Ransomware attacks target a variety of victims including individuals, organizations, businesses, and governments. Attackers take control of this sensitive data and hold it hostage until a ransom is paid. Whether or not ransom is paid, the attackers' damage can be irreparable. It includes but is not limited to exposing personal information, identity theft, blackmail, permanent deletion or corruption of valuable data, and distributed denial-of-service (DDoS) attacks. Moreover, the financial implications of ransomware attacks are substantial. Victims face not only the direct costs of ransom payments, but also the expenses associated with system restoration and the potential fines for data protection violations. Furthermore, an ethical dilemma emerges from the response to extortion, as victims must grapple with the moral implications of potentially supporting criminal activities by paying ransoms to recover their data. This raises questions about the broader responsibility of stakeholders in cybersecurity and their duty to protect data privacy. The first ransomware attack emerged in 1989 and since then, ransomware attackers have persisted as a cybersecurity threat, striking at the core of data privacy. This paper delves into the multifaceted impact of ransomware incidents on data security, current ransomware techniques and strategies, and the ethical dilemmas they raise. Because ransomware attackers are constantly evolving their methods, it is important to routinely review the current literature to fully understand the tactics and techniques used by these cybercriminals. This research paper will review, analyze, and synthesize the existing literature about ransomware. It is crucial for individuals and organizations to be proactive, understand the nature of the threat, and take precautions to protect themselves and their data. This study encourages further research and discourse on the multifaceted impact of ransomware attacks on data privacy and the pursuit of effective countermeasures.
The Impact of Augustine’s Theory of Evil on Latter-Day Saint Teachings
Authors: Miranda Judson. Mentors: Mike Ashfield. Institution: Utah State University. This research explores the relationship between Latter-Day Saint views and classical Christianity through the lens of the problem of evil. More specifically, this research looks at the implications both Augustine’s theory of evil and Latter-Day Saint theology have on each other and the impact that would occur should certain aspects be accepted by Latter-Day Saints. I argue that Latter-Day Saints should accept an interpretation of doctrine through the privation theory of evil in order to avoid potential threats to God’s goodness and the Plan of Happiness. Understanding Latter-Day Saint doctrine through this lens will better situate further research in the broader context of Christianity. Furthermore, accepting this theory will allow for a deeper understanding of the premortal existence and postmortal states. This deeper understanding is beneficial for both adherents to and researchers of Latter-Day Saint teachings. Finally, I argue this interpretation does not pose a threat to the doctrine of “opposition in all things.”
The Folklore of the Ghastly Menace
Authors: Matilda Gibb. Mentors: Ami Comeford. Institution: Utah Tech University. Prior to 1930, cannabis was used frequently in therapeutic practices and was found in many over-the-counter health products. However, around this time what has been dubbed the “pot panic” began infecting American citizens. It was then that the film Reefer Madness came onto the scene, exacerbating the public’s trepidation over the allegedly morally degrading effects cannabis inflicted. Putting melodrama at the forefront, the film exploits societal queasiness over sex, drugs, violent crime, and immigration to push its propagandist agenda. The fear-mongering rhetoric and imagery used effectively poisoned the public and altered societal perceptions of medical and recreational cannabis use. The goal of this project is to analyze the rhetorical argument presented in the film and address its broader reach and effects. Lastly, I will argue that the message presented in Reefer Madness was not based on any logical argument; it was based on folklore.
More Than Accommodating
Authors: Natalya Misener, John Wolfe. Mentors: John Wolfe. Institution: Utah Tech University. In our effort to improve education and create a more inclusive learning environment, Natalya Misener and Professor John Wolfe from Utah Tech University are exploring the experiences of neurodivergent students in classrooms. Our goal is to better understand, support, and empower these students, challenging the way things are typically done. We believe it's important to understand that these conditions come in a variety of forms and affect both students and professors in the classrooms in ways that are not currently being discussed, especially as many minorities, especially women and people of color, go undiagnosed and unaccommodated for a variety of reasons. We believe that by educating professors and students alike on how classroom environments can be more supportive of neurodivergent students, we can benefit the entire classroom, including ‘typical’ students. To make this happen, we believe it's important to rethink how we organize our classes, assignments, and interactions, so that we don't have to rely too much on services like the Disability Resource Center (DRC). We have practical suggestions, like adjusting assignments and deadlines and creating a comfortable classroom environment. It's also crucial to be clear in our expectations and listen to what the students need. We ask some important questions, like whether students should consider their instructors' limitations and how students see disabled instructors in their journey to graduation as well. We call for a shift beyond just accommodating neurodivergent students. We want to encourage educators to think about how they can better support all students in their classrooms. It's urgent that we change how we approach neurodivergent students in order to create a more inclusive, supportive, and successful environment for us all.
Ethical Analysis of Web3 and Decentralized Internet
Authors: Karli Kallas. Mentors: Jared Colton. Institution: Utah State University. For this presentation, I will present a care ethics analysis of Web3 and decentralized internet, with a specific focus on privacy concerns and the increase of internet scams due to the lack of regulation and accountability. Ethics of care argues that there is moral importance in promoting the well-being of caregivers and care receivers in a network of social relations, recognizing that all human life is co-dependent. It is important to note that care ethics was not designed to engage with such large networks of relationships; I will be expanding it to apply to this situation, so it gives us one new way of viewing the situation. I will expand on this ethical lens by including points from care ethics such as empathy in the design of Web3 and responsibility for online communities. My research explores the intricate web of relationships and responsibilities in the decentralized environment, aiming to shed light on the vulnerabilities within internet scams in contrast to the benefits of cryptocurrency. I examine the moral dimensions of these scams, considering the impact on individuals and the broader digital community. The emphasis on care ethics demonstrates the interconnectedness of actors within Web3, as well as the need for a collective commitment to certain forms of innovation.
Rotational Dynamics of Thrown Food Items and Their Impact on Canine Catching Success
Authors: Caleb Slade, Evelyn Davis, Michael Orr, Brevin Bell. Mentors: Vinodh Chellamuthu. Institution: Utah Tech University. Our study examines a viral video featuring a dog named Fritz who consistently fails to catch thrown food items. This research aims to determine whether Fritz's inability to catch is a result of his owner's throwing technique or Fritz's own inherent clumsiness. To investigate this, we employ a mathematical model based on kinematic differential equations to analyze the rotational dynamics of various geometrical shapes that simulate the thrown food items. By pinpointing moments in the projectile trajectory where the food items are most easily caught, we provide empirically-based recommendations aimed at improving Fritz's success rate.
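As a rough illustration of the kind of model described (not the authors' actual equations; the throw speed, spin rate, and "catchable" window below are all invented parameters), one can propagate a spinning food item ballistically and flag the instants when its position and orientation are simultaneously favorable:

```python
import numpy as np

# Toy model: a tossed treat follows a ballistic path while spinning at a
# constant rate; we flag the moments when it is both near mouth height and
# oriented roughly "flat side toward the dog."
g = 9.81
v0, launch_angle = 4.0, np.radians(55)   # throw speed (m/s) and angle
omega = 3 * 2 * np.pi                    # spin rate, rad/s
h0, mouth_height = 1.2, 0.6              # release and catch heights, m

t = np.linspace(0, 1.0, 2001)
y = h0 + v0 * np.sin(launch_angle) * t - 0.5 * g * t**2   # vertical position
theta = (omega * t) % (2 * np.pi)                         # orientation angle

near_mouth = np.abs(y - mouth_height) < 0.05
flat_on = np.minimum(theta, 2 * np.pi - theta) < np.radians(20)
catchable = t[near_mouth & flat_on]
print("catchable instants (s):", np.round(catchable, 3))
```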
Lake Bonneville's Legacy: Unraveling Genetic Drift and Diversity Loss in West Desert Bonneville Cutthroat Trout
Authors: Tanner Van Orden, Dennis Shiozawa, Peter Searle, Ana Kokkonen, Paul Evans. Mentors: Paul Evans. Institution: Brigham Young University. The Bonneville cutthroat trout (Oncorhynchus clarkii utah) is the only trout species native to the Northern and Southern Snake ranges in Eastern Nevada. Streams on the east side of both ranges flowed into Lake Bonneville 12,000 – 15,000 years ago when Lake Bonneville was at its maximum. As Lake Bonneville retreated, Bonneville cutthroat trout in these streams were left isolated. To better understand how Bonneville cutthroat trout in the study area are coping with a changing climate, we investigated the genetic diversity of Bonneville cutthroat in the Northern and Southern Snake ranges and compared them to historic samples. We found highly differentiated cutthroat trout populations in close proximity to each other and a genetic diversity loss of up to 86.3% in the last 12 years.
Effect of Print Parameters on tensile and flexure strength of 3D printed plastic
Authors: Baylee Schumacher, Ryley Horrocks, Divya Singh. Mentors: Divya Singh. Institution: Utah Tech University. 3D printed plastics have gained immense popularity in recent times given their direct integration with CAD-based software platforms as well as ease of manufacturing. In addition, use of 3D printed parts can be more economical and less labor intensive. However, the mechanical behavior of these plastics is not well defined. There are many factors that affect the integrity of 3D printed plastic – extrusion temperature, infill pattern, and infill percentage being a few of them. In this work, the authors have attempted to study the effect of these factors on the tensile and flexure (bending) strength of 3D printed plastics. First, tensile and bending samples are prepared by varying one or more of the following – extrusion temperature, infill pattern, and infill percentage. The specimens are then tested in a destructive manner for tensile and flexure strength following the standard tensile and three-point bending techniques on a Materials Testing System. The results on the variation in strength as a function of print parameters provide insight into the importance of standardizing these parameters to maximize strength and minimize cost. Keywords: additive manufacturing, 3D printed plastic, tensile strength, flexure strength
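For reference, the quantities reported from such tests are typically computed with the standard strength formulas for a uniaxial tensile test and a three-point bend (general textbook relations, not values or notation taken from this paper):
\[
\sigma_t = \frac{F_{\max}}{A_0}, \qquad
\sigma_f = \frac{3\,F_{\max}\,L}{2\,b\,d^2},
\]
where $F_{\max}$ is the peak load, $A_0$ the tensile specimen's gauge cross-sectional area, $L$ the support span in bending, $b$ the specimen width, and $d$ its thickness.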
Is this you? Foucauldian ethics in multifactor authentication
Authors: Maren Archibald. Mentors: Jared Colton. Institution: Utah State University. Two-factor authentication was in use as early as 1994 with a patent belonging to telecommunications company Ericsson, which described the tech as “an authentication unit which is separate from preexisting systems." Today, multifactor authentication (MFA) requires a user to prove their identity by way of their knowledge, through a password or one-time code; their possession, through a physical key; or their inherence, through biometrics like a fingerprint or an eye scan. The secure sign-in method has proliferated in recent years, but while significant time has been dedicated to refining it, little if any research has been done in relation to its ethical implications. In his scholarship on discipline, Michel Foucault writes of “examination that places individuals in a field of surveillance.” MFA is one such method of surveillance — various forms track users’ location data, common usage and login hours, and biometric information. In my presentation, I will apply Foucauldian ethics to show how MFA is uniquely situated among other forms of societal documentation because of its purpose. The very data that depersonalizes users into numbers is also meant to be so precise and personal that it is the only way to believe users are who they say they are. And beyond MFA’s treatment of individual users looms the widespread collapse of a distinction between privacy and security. These values are not incompatible, but organizations have implemented MFA in a way that requires users to sacrifice privacy in order to gain security. My research will show how MFA aggrandizes the power differential between users and tech giants and threatens the ability to simultaneously maintain privacy and digital identity.
Three-Dimensional Construction of Coronary Vasculature Geometries
Authors: Aksel Anderson, Lindsay Rupp, Anna Busatto, Rob MacLeod. Mentors: Rob MacLeod. Institution: University of Utah. Cardiovascular disease is the leading cause of death globally, and one of the most impactful subsets is coronary artery disease (CAD). CAD occurs when one or more obstructions in the arteries fail to supply the heart with sufficient blood flow, ultimately resulting in tissue death. Understanding the geometric structure of the heart’s vasculature can provide insight into the development of CAD. However, previous research has only captured vasculature geometries for the main coronary branches, neglecting the downstream vasculature. Therefore, capturing the downstream vasculature would offer researchers a more comprehensive model to study CAD. Our study developed a method to efficiently obtain subject-specific, comprehensive vasculature geometries. First, we obtained five computed tomography (CT) scans of explanted porcine hearts with the coronaries highlighted via a contrast agent. From these CT images, we developed a novel method to efficiently capture the vasculature geometry of each subject. Once we obtained the final geometries, we computed two metrics to determine the extent of the captured vasculature: (1) the number of vessel segments and (2) the smallest vessel radius. We obtained an average vessel segment count of approximately 169 ± 63 vessels and a smallest vessel radius of approximately 0.44 ± 0.15 mm. We were able to successfully capture vessels over 85% smaller than the largest porcine coronary artery, which has a radius of approximately 3.5 mm. Our methodology will help researchers and clinicians obtain comprehensive vascular geometries to enhance the study and treatment of CAD.
Perceived Fatigue and Physical Activity Enjoyment Following Indoor and Outdoor Moderately Heavy Superset Resistance Training
Authors: Korina Ziegler, Aaron McKenzie, Wesley Ziegler, Spencer Maxwell, Bryson Carrier, Charli Aguilar, Alexandra Routsis, Talon Thornton, Jae Bovell, Setareh Star Zarei, Devin Green, Amanda Hawkes, Jeffrey C Cowley, Merrill Funk, James Navalta, Marcus M Lawrence. Mentors: Marcus Lawrence. Institution: Southern Utah University. ACSM has again determined that resistance training (RT) and outdoor activities are two of the top ten worldwide fitness trends for 2023. We previously found that RT outdoors had a significantly lower perception of effort (RPE) compared to indoor RT, despite no physiological differences in heart rate (HR) and energy expenditure (EE). However, no study has examined other feelings during RT in indoor or outdoor settings. PURPOSE: To determine how indoor or outdoor environments affect perceptions of fatigue and physical activity enjoyment following RT in recreationally resistance-trained adults. METHODS: Twenty-three adult participants (n=10 female, n=13 male) completed this study. The Visual Analog Scale Fatigue (VAS-F) measured perceived fatigue and the Physical Activity Enjoyment Scale – Short Version (PACES-S) measured PA enjoyment, and both were measured at baseline and then immediately following an acute session of indoor or outdoor RT. HR was obtained from a chest strap (Polar H10) and EE from a Portable Metabolic Cart (COSMED K5). In random order across indoor and outdoor settings, participants completed 4 supersets of the reverse lunge and shoulder press exercises using dumbbells at a light (2 sets) and moderately heavy (2 sets) intensity with 1 superset of 6 repetitions per exercise and 1 min rest between supersets. A paired t-test (for HR & EE comparisons) or a one-way repeated measures ANOVA with Sidak post-hoc test (for VAS-F & PACES-S comparisons) was used to determine differences (p<0.05). RESULTS: No significant differences were observed between indoor and outdoor RT for the physiological variables of average HR (129.4±17.2 and 127.75±23.3 bpm, respectively, p=0.66) and EE (30.6±11.5 and 28.3±9.9 kcals, respectively, p=0.06). Perceived fatigue significantly (p<0.0001) increased from baseline (1.13±0.94 arbitrary units, AUs) following indoor (4.54±1.91 AUs) and outdoor (3.99±1.54 AUs) RT, but no environmental differences (p=0.36) were observed. PA enjoyment was not significantly (p range: 0.27-0.93) different between baseline (18.73±1.83 AUs) and following indoor (18.18±1.99 AUs) or outdoor (18.36±1.99 AUs) RT. CONCLUSION: In recreationally resistance-trained adults, moderately heavy superset RT in indoor or outdoor settings does not alter perceived fatigue or physical activity enjoyment.
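As an illustration of the paired comparison described in the methods, the following minimal sketch applies a paired t-test to hypothetical per-participant heart-rate values; it is not the study's data or analysis script.

```python
# Illustrative paired comparison of indoor vs outdoor heart rate (hypothetical values).
import numpy as np
from scipy import stats

indoor_hr = np.array([128, 131, 125, 140, 119, 133, 127, 135])   # bpm, hypothetical
outdoor_hr = np.array([126, 129, 127, 138, 121, 130, 125, 132])  # bpm, hypothetical

t_stat, p_value = stats.ttest_rel(indoor_hr, outdoor_hr)  # paired t-test across conditions
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 would indicate an environment effect
```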
Addressing Health Care Access Disparities: Bridging the Gap for a Healthier Society
Authors: Jayden Peacock. Mentors: Jodi Corser. Institution: Southern Utah University. Healthcare access disparities remain a persistent and pressing issue in the community, disproportionately affecting marginalized communities. These disparities stem from factors such as geographic location, socioeconomic standing, and ethnicity, all of which can hinder individual and societal growth. Addressing these disparities is vital as it can improve the well-being of individuals and therefore society. Examining the nature of healthcare access disparities helps present a holistic approach that can help bridge the gap, providing a more equitable healthcare system. Healthcare disparities are deeply embedded in the network of social and structural inequalities, driven by a complex interplay of geographical, economic, and cultural factors. Such disparities create health gaps, where individuals from marginalized communities are disproportionately burdened by a lack of access to essential medical services. The historical neglect of underserved communities has contributed to the persistence of these disparities, necessitating a comprehensive and transformative approach to address the problem. Effectively combating healthcare access disparities includes implementing community involvement that empowers underserved communities, enacting healthcare policy reforms to address systemic barriers, and leveraging locum tenens to improve access to care in all areas. Addressing these aspects will lead to a more equitable and inclusive healthcare system that results in improved health outcomes and well-being for all individuals. Addressing healthcare access disparities is essential for individual and community improvement. Through community engagement, healthcare reforms, and the use of locum tenens, a comprehensive approach can help build a more equitable and inclusive healthcare system.
The Cold, Hard Truth: Cryopreserved Tissue is Superior to FFPE Tissue in Molecular Analysis
Authors: Ken Dixon, Jack Davis, DeLayney Anderson, Mackenzie Burr, Peyton Worley, Isaac Packer, Bridger Kearns, Jeffrey Okojie. Mentors: Jared Barrott. Institution: Brigham Young University. Introduction: Personalized cancer care requires molecular characterization of neoplasms. While the research community accepts frozen tissues as the gold standard analyte for molecular assays, the source of tissue for all testing of tumor tissue in clinical cancer care comes almost universally from formalin-fixed, paraffin-embedded tissue (FFPE). Specific to genomics assays, numerous studies have shown significant discordance in genetic information obtained from FFPE samples and cryopreserved samples. To explain the discordance between FFPE samples and cryopreserved samples, a head-to-head comparison between FFPE and cryopreserved tissues was performed to analyze the DNA yield, DNA purity, and DNA quality in terms of DNA length. Methods: Human (n = 48) and murine tissues (n = 10) were processed by traditional formalin fixation and paraffin embedding or placed in cryovials containing HypoThermosol solution. Nineteen human-matched samples were included. These cryovials were cooled to -80°C slowly and stored in liquid nitrogen until the time of the study. DNA was extracted using the same protocol for both tissue types except that tissues embedded in paraffin were first dewaxed using a xylene substitute followed by a multistep rehydration protocol using ethanol and water. Samples were weighed and calibrated to have the same starting mass. After the column purification, samples were eluted in 20 µL and concentration and purity were measured on a Nanodrop. Purity was determined by calculating the 260/280 ratio. DNA fragment length was measured on an Agilent Fragment Microelectrophoresis Analyzer. Results: Graded amounts of tumor tissue (5 to >50 mg) were used to determine the lowest starting material needed to extract 40 ng/mg of DNA. The average for both sample types reached the minimal threshold of 40 ng/mg. However, 74% of FFPE specimens failed to meet the minimum 40 ng/mg, whereas only 21% were below the threshold in the cryopreserved samples (Figure 1). In the cryopreserved group, the average DNA yield was 222.1 ng/mg, whereas 52.8 ng/mg was obtained from FFPE tissue. For DNA purity in cryopreserved tissues, the 260/280 ratio range was 1.09-2.13 with a mean of 1.79. The 260/280 ratio range in FFPE tissues was 0.85-2.76 with a mean of 1.65 (Figure 2). The DNA Quality Number (DQN) is a measurement of DNA fragment length and the percentage that exceeds the threshold of 300 bp. For FFPE, the DQN was 4.4 compared to a DQN of 9.8 for the cryopreserved samples (Figure 3). Setting a higher threshold of DNA length to 40,000 bp and measuring the area under the curve (AUC), it was observed that cryopreserved samples were 9-fold higher in fragments greater than 40,000 bp (Figure 4). Conclusions: Cryopreserved cancer tissue provides superior quality assurance measurements of DNA over FFPE. Treatment decisions based on molecular results demand accuracy and validity. The pathology community should support efforts to cryopreserve cancer biospecimens in the clinical setting to provide valid molecular testing results. The automatic pickling of tumor specimens in formalin is no longer an acceptable default.
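The yield and purity metrics reported here follow standard calculations; the sketch below illustrates them with hypothetical Nanodrop readings and tissue mass, and is not the authors' workflow.

```python
# Illustrative yield and purity calculations (hypothetical Nanodrop readings).

def dna_yield_ng_per_mg(concentration_ng_per_ul: float, elution_volume_ul: float, tissue_mass_mg: float) -> float:
    """Total extracted DNA normalized to starting tissue mass (ng/mg)."""
    return (concentration_ng_per_ul * elution_volume_ul) / tissue_mass_mg

def purity_260_280(a260: float, a280: float) -> float:
    """260/280 absorbance ratio; ~1.8 is generally considered pure DNA."""
    return a260 / a280

yield_ngmg = dna_yield_ng_per_mg(concentration_ng_per_ul=110.0, elution_volume_ul=20.0, tissue_mass_mg=10.0)
print(yield_ngmg, yield_ngmg >= 40)  # 220.0, True -> meets the 40 ng/mg threshold
print(purity_260_280(1.25, 0.70))    # ~1.79
```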
Differences in Ultrasound Elastography Measurements of the Patellar Tendon Using Pad vs No Pad and of Dominant vs Non-Dominant Legs
Authors: Ashley Allan, Mikayla Kimball, Noah Bezzant, Brent Feland, Josh Sponbeck. Mentors: Brent Feland. Institution: Brigham Young University. BACKGROUND: Recent studies have shown that there are bilateral differences in the cross-sectional area of the patellar tendon for lead vs non-lead extremities of athletes. Yet, little research can be found as to whether a difference develops over one’s lifetime between the stiffness of the patellar tendons in the dominant vs non-dominant legs. Reliability has not yet been established for elastography in the patellar tendons, so we are continuously striving to gather more reliable data on shear-wave elastography of the patellar tendon. PURPOSE: The aim of this study was to assess whether there is a difference in average patellar tendon stiffness as measured by ultrasound elastography using a pad vs no pad, and between the self-reported non-dominant and dominant knees of senior athletes over the age of 50. Dominance was taken as reported in a modified KOOS (Knee Injury and Osteoarthritis Outcome Score) survey. METHODS: Data were collected from 15 active, senior-aged volunteers at the Huntsman World Senior Games in St. George, Utah, in 2023. All subjects (mean age = 67.29 ± 6.26 yrs, height = 175.44 ± 8.18 cm, weight = 87.40 ± 12.21 kg) signed an approved consent and completed a modified KOOS survey. Participants were then seated on a table with their backs against the wall directly behind them, so that their lower legs hung off the table in a relaxed position. The patellar tendon was then imaged with a long-axis view using ultrasound elastography. ANALYSIS: All data were analyzed using JMP ver. 16.2 with a repeated measures analysis of variance (ANOVA) to determine if differences existed between pad and no pad and between dominant and non-dominant legs. RESULTS AND CONCLUSION: There was a significant difference (p=.0423) between pad and no pad patellar tendon measurements, but no significant difference between dominant and non-dominant legs when comparing sides combined with pad and no pad. A trend toward significance did exist, however, and we suspect that with more subjects analyzed, significance will emerge.
Concurrent Validity of Heart Rate Measurements by Bicep Worn Polar Verity Sense and OH1 Devices During Moderately Heavy Resistance Training
Authors: Marcus M Lawrence, Merrill Funk, Jeffrey C Cowley, Amanda Hawkes, Aaron McKenzie, Alexandra Routsis, Wesley Ziegler, Talon Thornton, Spencer Maxwell, Korina Ziegler, James Navalta. Mentors: Marcus Lawrence. Institution: Southern Utah University. American College of Sports Medicine has again found that wearable technology and resistance training (RT) are two of the top 5 fitness trends in 2023. Our lab recently found that the bicep-worn Polar Verity device was valid and reliable for measuring average and maximal heart rate (HR) during light intensity circuit RT. However, no study has examined other bicep-worn devices during RT while also examining heavier intensities. PURPOSE: To determine the concurrent validity of identical Polar OH1 (x2) and Verity Sense (x2) bicep-worn devices in recording average and maximal HR following moderately heavy RT. METHODS: Twenty-one adult participants completed this study (n=10 female, n=11 male; age: 26.1±9.2 yrs; height: 171.3±9.4 cm; mass: 71.4±18.2 kg; RT experience: 5.7±4.9 yrs). The four bicep devices (Polar OH1 x2 and Polar Verity Sense x2) were worn along with the Polar H10 chest strap, the criterion for HR. Participants completed 8 supersets of the reverse lunge and shoulder press exercises using dumbbells at a light (4 sets) and moderately heavy (4 sets) intensity with 1 superset of 6 repetitions per exercise (12 repetitions per superset) and 1 min rest between supersets. Data were analyzed for validity (Mean Absolute Percent Error [MAPE] and Lin’s Concordance Coefficient [CCC]), with predetermined thresholds of MAPE<10% and CCC>0.70. Paired t-tests were used to determine differences (p<0.05). RESULTS: For average or maximal HR, neither the Polar Verity Sense 1 (127.2±17.8 or 151.5±16.7 bpm) nor 2 (125.7±18.8 or 147.9±18.9 bpm), nor the Polar OH1 1 (128.7±18.5 or 154.5±18.1 bpm) nor 2 (129.5±18.2 or 156.4±17.4 bpm), was significantly (p range: 0.14-0.97) different from the criterion (128.6±19.2 or 149.3±18.0 bpm). However, the Polar Verity 1 and 2 were not considered valid for average HR (MAPE range: 16.17-17.57%; CCC range: 0.07-0.13) or maximal HR (MAPE range: 11.60-13.33%; CCC range: 0.02-0.29). The Polar OH1 1 and 2 devices were not considered valid, either, for average HR (MAPE range: 17.22-17.25%; CCC range: 0.08-0.09) or maximal HR (MAPE range: 13.24-13.92%; CCC range: .024-0.27). CONCLUSION: Despite our lab previously finding the Polar Verity valid for HR measurements during light intensity RT, the current bicep-worn devices should not be utilized during heavier intensity RT for accurate HR measurements. Individuals resistance training and utilizing bicep-worn devices for heart rate should use them cautiously.
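For clarity, the two validity statistics named above (MAPE and Lin's concordance correlation coefficient) can be computed as in this minimal sketch, shown here with hypothetical device and criterion heart-rate values rather than the study's data.

```python
# Illustrative MAPE and Lin's CCC for device vs criterion heart rate (hypothetical values).
import numpy as np

device = np.array([120, 135, 128, 150, 142], dtype=float)     # bpm, hypothetical wearable readings
criterion = np.array([125, 133, 131, 148, 145], dtype=float)  # bpm, hypothetical chest-strap readings

# Mean absolute percent error relative to the criterion.
mape = np.mean(np.abs(device - criterion) / criterion) * 100

# Lin's concordance correlation coefficient: 2*cov / (var_x + var_y + (mean_x - mean_y)^2).
cov = np.cov(device, criterion, ddof=0)[0, 1]
ccc = (2 * cov) / (device.var() + criterion.var() + (device.mean() - criterion.mean()) ** 2)

print(f"MAPE = {mape:.2f}%  (validity threshold here: < 10%)")
print(f"CCC  = {ccc:.2f}   (validity threshold here: > 0.70)")
```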
Transcriptomic Analysis of B cell RNA-seq Data Reveals Novel Targets for Lupus Treatment
Authors: Sehi Kim, Naomi Rapier-Sharman, Michael Told. Mentors: Brett Pickett. Institution: Brigham Young University. Systemic Lupus Erythematosus (SLE) is an autoimmune disease that produces autoantibodies affecting various body regions, including skin, joints, kidneys, brain, serosal surfaces, blood vessels, etc., resulting in damage to organs and tissues. Patients commonly experience an elevated risk of bleeding or blood clotting, joint stiffness, pain, fatigue, and depression. Our study involved the collection of RNA-seq data from B cells of both SLE patients and healthy people from the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) database. Subsequently, we employed the Automated Reproducible MOdular Workflow for preprocessing and differential analysis of RNA-seq data (ARMOR) workflow. The differentially expressed genes identified by ARMOR were then analyzed using the SPIA (Signaling Pathway Impact Analysis) algorithm to find the pathways associated with lupus. We further utilized the Pathways2Targets algorithm to predict potential lupus treatments based on known protein-drug interactions. In our study on lupus patients, analysis using ARMOR, SPIA, and Pathways2Targets identified 10,000 differentially expressed genes and revealed their modulated pathways, providing insights into molecular cascades in lupus. Furthermore, we identified potential drug targets, paving the way for therapeutic interventions that could ultimately lead to the discovery of new drug treatments. We anticipate that our findings could be utilized for the benefit of lupus patients, further advancing personalized medicine strategies and holding promise for improving the quality of life of individuals grappling with this complex autoimmune disorder.
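As a generic illustration of the differential-expression filtering step (not the ARMOR or SPIA pipelines themselves), the following sketch selects genes from a hypothetical results table by adjusted p-value and fold change; the column names and values are placeholders.

```python
# Illustrative filtering of a differential-expression results table (hypothetical placeholders,
# not the ARMOR/SPIA pipeline or this study's data).
import pandas as pd

results = pd.DataFrame({
    "gene":   ["IFI44L", "OAS1", "ACTB", "MX1"],
    "log2FC": [2.4, 1.9, 0.1, 3.1],
    "padj":   [1e-8, 3e-5, 0.8, 2e-10],
})

# Keep genes that pass both a statistical and an effect-size cutoff.
de_genes = results[(results["padj"] < 0.05) & (results["log2FC"].abs() > 1.0)]
print(de_genes["gene"].tolist())  # ['IFI44L', 'OAS1', 'MX1']
```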
Ovarian Exosomal Therapy for Neurological Health in Mice
Authors: Nathan McCoy. Mentors: Jeff Mason. Institution: Utah State University. Aging-associated changes in motor function often lead to the development of musculoskeletal tremors. In women, the development and severity of tremors is causally related to ovarian failure at menopause. In the laboratory, mice can serve as an effective model for the development of aging-associated tremors. Based on our previous studies, ovarian somatic tissues transplanted from young mice to old mice significantly decreased tremor amplitudes and lowered levels of gliosis in the brains of the older recipient mice, compared to age-matched control mice. The study was carried out using both germ-cell-containing and germ-cell-depleted ovarian tissue. Neurological improvement and overall health were achieved using both types of tissue with similar results, indicating that a non-hormonal influence may be responsible for this phenomenon. This study aims to identify which properties of ovarian tissue cause these neurological health benefits. Ovarian tissues secrete exosomes, vesicles that can be filled with miRNA and are transported throughout the body. We aim to isolate these exosomes from ovarian tissues using density gradient-based centrifugation and introduce them into mice via intraperitoneal injection to see if the same neurological improvements are achieved as in mice with ovarian somatic tissue transplants. If such improvements are corroborated, then ovarian exosomes will be sequenced to identify which miRNA sequences signal the body to undergo these health improvements.
Repetition Count Concurrent Validity of Various Garmin Wrist Watches During Light Circuit Resistance Training
Authors: Wesley Ziegler, Spencer Maxwell, Aaron McKenzie, Talon Thornton, Alexandra Routsis, Korina Ziegler, Jae Bovell, Devin Green, Bryson Carrier, James Navalta, Setareh Star Zarei, Kaye Lavin, Jeffrey C Cowley, Amanda Hawkes, Merrill Funk, Marcus M Lawrence, Charli Aguilar. Mentors: Marcus Lawrence. Institution: Southern Utah University. Wearable technology and strength training with free weights are two of the top 5 fitness trends worldwide. However, minimal physiological research has been conducted on the two together, and none has measured the accuracy of devices measuring repetition counts across exercises. PURPOSE: The purpose of this study was to determine the concurrent validity of four wrist-worn Garmin devices, Instinct (x2), Fenix 6 Pro, and Vivoactive 3, to record repetition counts while performing 4 different exercises during circuit resistance training. METHODS: Twenty participants (n=10 female, n=10 male; age: 23.2±7.7 years) completed this study. Participants completed 4 circuits of 4 exercises (front squat, reverse lunge, push-ups, and shoulder press) using dumbbells at a light intensity with 1 set of 10 repetitions per exercise, 30 seconds rest between exercises, and 1-1.5 min rest between circuits. Mean absolute percent error (MAPE, ≤10%) and Lin’s Concordance Coefficient (CCC, ρ≥0.7) were used to validate each device’s repetition counts for all exercises against the criterion reference manual count. Dependent t-tests determined differences (p≤0.05). RESULTS: No devices were considered valid (meeting both the threshold for MAPE and CCC) for measuring repetition counts during front squats (MAPE range: 3.0-18.5% and CCC range: 0.27-0.68, p value range: 0.00-0.94), reverse lunge (MAPE range: 44.5-67.0% and CCC range: 0.19-0.31, p value range: 0.00-0.28), push-ups (MAPE range: 12.5-67.5% and CCC range: 0.10-0.34, p value range: 0.07-0.83), and shoulder press (MAPE range: 18.0-51.0% and CCC range: 0.11-0.43, p value range: 0.00-0.79) exercises. CONCLUSION: The wearable wrist-worn devices were not considered accurate for repetition counts, and thus manual counting should be utilized. People who strength train using free weights will need to wait for either improved repetition counting algorithms or increased sensitivity of devices before this measure can be obtained with confidence.
Antioxidant Combinatory Cytomegalovirus Treatment
Authors: Kade Robison, David Britt, Elizabeth Vargis. Mentors: David Britt. Institution: Utah State University. Cytomegalovirus (CMV) is the leading cause of sensorineural hearing loss, the most prevalent form of permanent hearing loss, worldwide. CMV treatment requires long-term administration of nucleoside analog antivirals such as ganciclovir (GCV). Although ganciclovir effectively inhibits CMV, it also inhibits neutrophils, an essential component of the immune system, reducing optimal treatment duration. Previous studies have demonstrated that ganciclovir toxicity can be reduced while maintaining effective CMV inhibition by combining subtherapeutic doses of ganciclovir with quercetin, an FDA-approved hydrophobic flavonoid with antiviral properties, solubilized with a mitochondria-targeting drug delivery vehicle, Poloxamer 188 (P188). Further efforts have been made to optimize the combinatorial ganciclovir with quercetin encapsulated in P188 (GCV-QP188) treatment by exploring the potential benefits of adding antioxidant vitamins to the GCV-QP188 treatment. One of the pathways by which CMV induces hearing loss is the generation of excess reactive oxygen species, specifically in the mitochondria. Current literature suggests that the toxic effects of the reactive oxygen species produced by CMV in the mitochondria could be reduced via natural antioxidant vitamin treatments. Ascorbic acid, also known as vitamin C, was the first antioxidant vitamin investigated due to its synergistic antiviral properties when paired with quercetin to treat SARS-CoV-2. Yet, the addition of ascorbic acid into the combinatorial treatment was more toxic than the existing GCV-QP188 treatment. Current efforts are concentrated on assessing the effect of selectively delivering hydrophobic antioxidants to the mitochondria of CMV-infected mouse fibroblast cells, as targeted antioxidant delivery will require lower antioxidant concentrations, reducing associated toxicity. The addition of the hydrophobic antioxidants retinol and alpha-tocopherol, vitamins A and E respectively, delivered via mitochondria-targeting P188 to the existing GCV-QP188 treatment is being investigated to determine if it will significantly improve GCV-QP188 treatment efficacy.
The Effects of Initiating a 24-hour Fast with a Low Versus a High Carbohydrate Shake on Pancreatic Hormones in the Elderly: A Randomized Crossover Study
Authors: Spencer Hawes, Katya Hulse, McKay Knowlton, Landon Deru, Bruce Bailey. Mentors: Bruce Bailey. Institution: Brigham Young University. The aim of this study is to understand how the macronutrient composition of the fast-initiating meal influences glucose-regulating hormones in older, sedentary, and abdominally obese adults. Insulin, amylin, and glucagon were measured immediately before and after a 24-hour fast, as well as 48 hours after fast initiation. Understanding these outcomes will inform fasting protocols such as time-restricted eating and alternate-day fasting, which offer potential long-term health benefits. Sixteen participants (7 male, 9 female) each completed two 24-hour fasts consuming only water. In random order, one fast began with a high-carb shake and the other with a low-carb shake of equal calories, volume, and fiber density. After each fast, participants lived and ate normally and then returned 24 hours later. Venous blood draws were taken at hours 0, 1, 24, and 48 to monitor levels of insulin, amylin, and glucagon. There was a significant condition-by-time interaction for insulin (F = 4.08, P < 0.01), amylin (F = 3.34, P = 0.02), and glucagon (F = 7.93, P < 0.01). Insulin (P = 0.02) and amylin (P = 0.01) were higher and glucagon lower (P = 0.05) after consuming the high carbohydrate shake compared to the low carbohydrate shake. There was no difference, however, between conditions for insulin, glucagon, or amylin at 0, 24, and 48 hours.
Internalized HIV stigma among women giving birth in Tanzania: A mixed-methods study
Authors: Anya Weglarz. Mentors: Melissa Watt. Institution: University of Utah. Background: Women living with HIV (WLHIV) commonly experience internalized HIV stigma, which refers to how they feel about themselves as a person living with HIV. Internalized stigma interferes with HIV care-seeking behavior and may be particularly heightened during the pregnancy and postpartum periods. This thesis aimed to describe internalized HIV stigma among WLHIV giving birth, identify factors associated with internalized HIV stigma, and examine qualitatively the impacts of internalized HIV stigma on the childbirth experience. Methods: Postpartum WLHIV (n=103) were enrolled in the study between March and July 2022 at six clinics in the Kilimanjaro Region, Tanzania. Participants completed a survey within 48 hours after birth, prior to being discharged. The survey included a 13-item measure of HIV-related shame, which assessed levels of internalized HIV stigma (range: 0-52). Univariable and multivariable regression models examined factors associated with internalized HIV stigma. Qualitative in-depth interviews were conducted with pregnant WLHIV (n=12) and postpartum WLHIV (n=12). Thematic analysis, including memo writing, coding, and synthesis, was employed to analyze the qualitative data. Results: The survey sample had a mean age of 29.1 (SD = 5.7), and 52% were diagnosed with HIV during the current pregnancy. Nearly all participants (98%) endorsed at least one item reflecting internalized HIV stigma, with an average endorsement of 9 items (IQR = 6). The most commonly endorsed items were: “I hide my HIV status from others” (87%), “When others find out I have HIV, I expect them to reject me” (78%), and “When I tell others I have HIV, I expect them to think less of me” (75%). In the univariable model, internalized stigma was associated with two demographic characteristics: being Muslim vs. Christian (β = 7.123; 95% CI: 1.435, 12.811), and being in the poorest/middle national wealth quintiles (β = 5.266; 95% CI: -0.437, 10.969). Internalized stigma was associated with two birth characteristics: having a first birth vs. having had previous births (β = 4.742; 95% CI: -0.609, 10.093), and attending less than four antenatal care appointments (β = 5.113; 95% CI: -0.573, 10.798). Internalized stigma was associated with two HIV experiences: being diagnosed with HIV during the current pregnancy vs. diagnosis in a prior pregnancy (β = 5.969; 95% CI: -1.196, 10.742), and reporting experiences of HIV stigma in the health system (β = 0.582; 95% CI: 0.134, 1.030). In the final multivariable model, internalized stigma was significantly associated with being Muslim vs. Christian (β = 6.80; 95% CI: 1.51, 12.09), attending less than four antenatal care appointments (β = 5.30; 95% CI: 0.04, 10.55), and reporting experiences of HIV stigma in the health system (β = 0.69; 95% CI: 0.27, 1.12). Qualitative discussions revealed three key themes regarding the impact of internalized HIV stigma on the childbirth experience: reluctance to disclose HIV status, suboptimal adherence to care, and the influence on social support networks. Conclusion: WLHIV giving birth in this sample experience high rates of internalized HIV stigma. This stigma was significantly associated with being Muslim, as opposed to being Christian, attending less than four ANC appointments, and reporting experiences of HIV stigma in the healthcare setting.
Other factors that were correlated with higher levels of internalized stigma were socioeconomic status, parity, and timing of HIV diagnosis, all of which can impact access to and engagement in healthcare services during the intrapartum and postpartum periods. Internalized HIV stigma impacts the childbirth experience for WLHIV, making the labor and delivery setting an important site for intervention and support.
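To illustrate the type of univariable and multivariable regression described above, the following sketch fits both kinds of models to a small hypothetical data frame; the variable names and values are placeholders, not the study's dataset.

```python
# Illustrative univariable vs multivariable linear regression for a stigma score (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "stigma_score": [22, 31, 18, 27, 35, 15, 29, 24],  # hypothetical scores on a 0-52 scale
    "muslim":       [1, 1, 0, 0, 1, 0, 1, 0],           # 1 = Muslim, 0 = Christian (placeholder coding)
    "anc_lt4":      [0, 1, 0, 1, 1, 0, 1, 0],           # attended fewer than four antenatal visits
    "stigma_in_hs": [4, 9, 2, 6, 11, 1, 8, 5],          # stigma experienced in the health system
})

uni = smf.ols("stigma_score ~ muslim", data=df).fit()                              # univariable model
multi = smf.ols("stigma_score ~ muslim + anc_lt4 + stigma_in_hs", data=df).fit()   # multivariable model
print(uni.params["muslim"], multi.params["muslim"])  # beta coefficient for the same factor in each model
```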
The Effects of Dual-Task Activities on Language Fluency: Language Production While Driving
Authors: Alex Jarvis, Brooklyn Flowers, June Oaks, Sadie North. Mentors: Tyson Harmon. Institution: Brigham Young University. Background: Dual tasks have been found to negatively affect language production for people with and without aphasia (Harmon et al., 2023). For people with aphasia (PWA) specifically, previous research suggests that limited working memory or attentional capacity contributes to their difficulty with language tasks (Harmon et al., 2019; Pompon et al., 2015; King & Karen, 1996; Obermeyer et al., 2020). One common situation in which communication occurs within a dual-task environment is talking while driving. Investigating how talking while driving impacts spoken language could help us better understand how to facilitate both safer driving behaviors and improved communication among friends and family while driving. Longer-term, this understanding can springboard further research addressing assessment and intervention practices in aphasia, which better reflect communication in daily life. Original findings related to speech acoustics revealed that talking while driving led to increased speech intensity and decreased speech time ratios (Glenn, 2017; Simmons, 2016). The potential impact of these driving tasks on spoken language, however, has not been investigated. For the present study, we will conduct a secondary analysis of language samples produced across the aforementioned conditions to understand how they impact spoken language production. Method: Data for the present study were collected for a larger project with initial aims of investigating the impact of driving on speech acoustics (e.g., frequency, intensity). This project also investigated bidirectional interference between speech acoustics and driving as well as the effects of different conversational modalities (e.g., talking on the phone, in person, or through Bluetooth). Forty healthy adults who reported no speech, language, or hearing impairment participated in the study. Participants completed seven tasks, which were presented in a random order: driving without speaking, speaking on a hand-held cell phone, speaking on a hands-free phone, talking to a person next to them, speaking on a hand-held cell phone while driving, speaking on a hands-free phone while driving, and talking to a person next to them while driving. Within each of these conditions, participants discussed a topic they selected from a list while completing these tasks. To analyze spoken language, we will first transcribe samples orthographically. These transcriptions will then be coded for lexical-phonological, morphosyntactic, and macro-linguistic errors. Parametric statistical analysis will be used to compare across different age groups. Anticipated Results: We hypothesize that participants will demonstrate more errors in conditions that involve talking while driving (i.e., dual-task conditions) than in conditions that involve talking alone (i.e., single-task conditions). Previous research suggests dual-tasking has a negative effect on language, including lexical and phonological errors, even in non-aphasic participants (Harmon et al., 2023). In this study, we would expect more lexical-phonological (e.g., fillers, revisions, repetitions) and macro-linguistic (e.g., aposiopesis) errors during dual-task conditions. For future studies involving PWA, we would expect more impaired language during dual-task activities than for those without aphasia.
Differences in Absolute and Relative Upper and Lower Body Strength Measures in Intermediate and Advanced Climbers
Authors: Anna Edler, Ryan Kunkler, Casey Webb, Jacob Manning, Marcus M Lawrence. Mentors: Marcus Lawrence. Institution: Southern Utah University. Climbing (sport and bouldering) has become a very popular hobby for people all over the globe. Now that climbing is an Olympic sport, the need for understanding best training practices through evidence, not anecdotal experience, has grown. Using the International Rock Climbing Research Association (IRCRA) individual grading scale based on route completion difficulty, some research has shown that upper limb strength is important for individuals to progress from recreational/intermediate to more advanced/elite levels. However, many studies use non-sport-specific measurements (i.e., hand dynamometer versus a finger climbing hold) and none have assessed rate of force development (RFD) or lower body strength contributions. PURPOSE: To test the hypothesis that, compared to recreational/intermediate climbers, advanced climbers would have greater dominant and non-dominant upper-body strength and finger RFD as well as lower body compound strength. METHODS: Nineteen subjects (n=8 female and n=11 male; age: 24.7±7.5 yrs; height: 177.6±7.8 cm; mass: 76.0±14.9 kg; IRCRA Sport Grade: 14.1±6.7; n=10 intermediate, n=9 advanced) completed this study. During a single session, following a standardized 3-5 min. warm-up, all participants’ dominant and non-dominant finger strength and RFD (using a Tindeq dynamometer load cell attached via static rope to a 20 mm edge), shoulder strength (using the same Tindeq load cell with a static rope attached to an Olympic ring), and lower-body compound strength (isometric mid-thigh pull using a G-Strength dynamometer load cell attached to a straight bar with a static rope) were assessed. Three trials were done on each measurement with 1 min. between trials and 3-5 min. between tests. Unpaired t-tests determined differences, p<0.05. RESULTS: Across every measurement, advanced climbers had significantly (p<0.05) higher values for absolute and relative (normalized to body weight, BW) measurements. As absolute and non-dominant results were similar, we only report dominant relative results, where appropriate. Indeed, compared to intermediate climbers, advanced climbers had significantly higher relative dominant finger RFD (9.9±3.7 vs 20.8±9.4 N/s·BW⁻¹), finger strength (0.5±0.1 vs 0.7±0.3 kg/BW), shoulder strength (0.7±0.2 vs 0.9±0.2 kg/BW), as well as relative compound strength (1.8±0.4 vs 2.6±0.7 kg/BW, respectively). CONCLUSION: Advanced climbers have larger absolute and relative RFD in their fingers, stronger dominant and non-dominant fingers and shoulders, as well as stronger lower body compound strength. Thus, individuals looking to progress from recreational/intermediate climbing grades to advanced/elite should focus on improving total body absolute and relative strength as well as finger RFD.
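The relative strength and RFD measures reported above follow simple normalizations; the sketch below illustrates them with hypothetical force, load, and body-mass values, not the study's measurements.

```python
# Illustrative calculations for relative strength and rate of force development (hypothetical values).

def relative_strength(load_kg: float, body_mass_kg: float) -> float:
    """Strength normalized to body weight (kg/BW)."""
    return load_kg / body_mass_kg

def rfd(peak_force_n: float, baseline_force_n: float, time_to_peak_s: float) -> float:
    """Rate of force development: change in force divided by time to peak (N/s)."""
    return (peak_force_n - baseline_force_n) / time_to_peak_s

print(relative_strength(53.0, 76.0))  # ~0.70 kg/BW, e.g., a finger-strength pull
print(rfd(450.0, 50.0, 0.35))         # ~1142.9 N/s
```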
Face it! How reliable is emotional facial expression coding within and across raters?
Authors: Anna Norman, Chloe Houghton, Macall Walker, Audrey Saunders. Mentors: Tyson Harmon. Institution: Brigham Young University. Background: Emotion, described as “physiological forces, located within individuals, that bolster our sense of uniqueness....” (Katriel, 2015, p. 57) is a critical aspect of day-to-day communication. For people with acquired language disorders post-stroke (i.e., aphasia), this interaction is particularly important due to relatively spared emotional processing, which has the potential to either facilitate or interfere with language processing (see e.g., Harmon et al., 2022; Ramsberger, 1996). The present study is part of a larger project, which seeks to determine whether people with aphasia exhibit more emotional facial expressions during personal narrative discourse than adults who do not have aphasia and whether these expressions are more emotionally arousing. The present study specifically seeks to investigate the reliability of facial coding by comparing average frequency and intensity of emotional facial expressions both within and across undergraduate student coders. Methods: In order to quantify emotional facial expression frequency and intensity, undergraduate research assistants are trained to code facial expressions using a modified FACES protocol (Kring and Sloan, 2007). The modified protocol will be used to code emotional facial expressions of video footage that was obtained from participants while they told personal narratives (e.g., talking about an illness they experienced or an important life event). First, research assistants identify the baseline facial expression for each participant. Next, research assistants code transitions from a neutral expression to an emotional facial expression for valence (positive/negative) and intensity. Intensity ratings are scaled from 1 to 4 depending on how many units of the face are involved within the corresponding facial expression. Using this protocol, research assistants will begin facial coding after they are trained and demonstrate mastery by attaining 80% agreement with a master code. Upon completing initial data coding, research assistants will be assigned to recode 10% of previously completed video samples as well as 10% of samples that were previously coded by other coders. This secondary coding will be used to measure intra- and inter-rater reliability across dependent variables: frequency of emotional facial expressions, intensity of positive facial expressions, and intensity of negative facial expressions. Average frequency of emotional facial expressions will be calculated as the number of facial expressions produced per minute within a given sample. Intensity of positive and negative facial expressions will be calculated as the mean intensity within each valence respectively. The average frequency and intensity of initial and reliability codes will then be compared using Pearson’s correlation coefficient. Anticipated Results: We anticipate that intra- and inter-rater reliability will be above 0.8. Through a strict training process, research assistants will calibrate their coding to achieve 80% agreement with the master code. We anticipate this training process to produce effective intra- and inter-rater reliability. Findings will be important for determining the reliability of facial coding procedures and trustworthiness of data for answering questions related to the longer-term project.
References: Harmon, T. G., Jacks, A., Haley, K. L., & Bailliard, A. (2020). How responsiveness from a communication partner affects story retell in aphasia: Quantitative and qualitative findings. American Journal of Speech-Language Pathology, 29(1), 142-156. https://doi.org/10.1044/2019_AJSLP-19-0091 Harmon, T. G., Nielsen, C., Loveridge, C., & Williams, C. (2022). Effects of positive and negative emotion on picture naming for people with mild to moderate aphasia: A preliminary investigation. Journal of Speech, Language, and Hearing Research, 64(3), 1025-1043. https://doi.org/10.1044/2021_JSLHR-21-00190 Katriel, T. (2015). Exploring emotion discourse. In H. Flam & J. Kleres (Eds.), Methods of exploring emotions (1st ed., pp. 57-66). Taylor & Francis Group. Kring, A. M., & Sloan, D. M. (2007). The facial expression coding system (FACES): Development, validation, and utility. Psychological Assessment, 19(2), 210-224. https://doi.org/10.1037/1040-3590.19.2.210
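As an illustration of the planned reliability comparison, the following minimal sketch correlates two raters' per-sample expression frequencies using Pearson's correlation coefficient; the rating values are hypothetical, not project data.

```python
# Illustrative inter-rater reliability check using Pearson's r (hypothetical ratings).
from scipy.stats import pearsonr

# Emotional facial expressions per minute for the same samples, coded by two raters (hypothetical).
rater_a = [3.2, 1.8, 4.5, 2.0, 5.1, 2.7]
rater_b = [3.0, 2.1, 4.2, 1.9, 5.4, 2.5]

r, p = pearsonr(rater_a, rater_b)
print(f"r = {r:.2f} (project threshold: > 0.8), p = {p:.3f}")
```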
The Effects of a High Carbohydrate vs. High Fat Pre-Fast Meal on Incretin Hormone Secretion: A Randomized Crossover Study
Authors: Parker Graves, Landon Deru, Bruce Bailey. Mentors: Bruce Bailey. Institution: Brigham Young University. Chronic illnesses such as stroke, heart disease, and diabetes rank among the leading causes of death in the United States. Recently, fasting has gained popularity as a means of preventing and treating chronic illness. PURPOSE: Fasting produces multiple beneficial physiological responses that have been shown to aid in chronic disease prevention, one of which is observed in relation to incretin hormones such as glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide-1 (GLP-1). These incretin hormones are released by the gut to augment the secretion of insulin and regulate postprandial glucose levels. During a fast, the decrease in incretin hormones and resultant insulin levels can help the body regain insulin sensitivity. This can lead to more effective blood glucose management and chronic illness prevention. The purpose of the study was to determine the impact of an acute 24-hour fast started with either a high fat (HF) or high carbohydrate (HC) meal on plasma GIP and GLP-1. METHODS: Subjects were over the age of 55, had a BMI equal to or greater than 27, and had no diagnosed metabolic disorders or other disqualifying medical issues. Using a randomized crossover design, each participant performed two 24-hour fasts. One fast began with a high carbohydrate meal and the other with a high fat meal, both of equal calories. Venous blood draws were taken at 0, 1, 24, and 48 hours. RESULTS: GIP and GLP-1 (P < 0.001) were both elevated 1 hour after consuming the pre-fast meal in both conditions. In addition, both GIP (P = 0.0122) and GLP-1 (P = 0.0068) were higher in the high fat condition compared to the high carbohydrate condition at 1 hour. There were no significant differences between conditions for either GIP or GLP-1 at any other time point. CONCLUSION: As expected, both incretin hormones spiked postprandially. We did find that GIP and GLP-1 levels were significantly higher at 1 hour postprandial for the HF meal compared to the HC meal. This provides evidence that macronutrient composition can affect incretin secretion and alter insulin sensitivity. However, the impact of the pre-fast meal on GLP-1 and GIP did not persist throughout the fast.
Investigating Gender Differences in Facial Expressiveness during Personal Narratives Using a Modified FACES Protocol
Authors: Leila Moore, Marin Farnsworth. Mentors: Tyson Harmon. Institution: Brigham Young University. Background: Facial expressions are crucial for understanding human emotions in communication. Coding and quantifying these expressions, however, have often been subjective, leading to issues with reliability and consistency (Kring and Sloan, 2007). To address this, we have modified the Facial Expression Coding System (FACES) protocol (Kring and Sloan, 2007), which traditionally relies on subjective interpretation. Our modified protocol employs a more objective approach by quantifying facial expressions based on the counting of facial units (e.g., eyes, corners of the mouth, eyebrows, and teeth). Each intensity rating corresponds to a certain number of facial units. For example, an expression involving one facial unit would receive an intensity rating of one whereas an expression involving two units would receive an intensity rating of two. Multiple studies have found that women are more emotionally expressive than men overall when considering gestures, body language, facial expressions, and tone of voice (Ashmore, 1990; Brody & Hall, 1993; Hall, 1984). Rather than focusing on overall emotional expressiveness, though, the present study aims to understand how gender impacts the production of emotional facial expressions specifically. Method: The proposed study aims to compare the frequency and intensity of facial expressions produced by men versus women when recounting personal narratives. Participants. Video footage from participants with and without aphasia producing personal narrative discourse will be obtained for equal numbers of male and female participants. Samples will include two personal narratives in response to prompts from the interviewer. The first prompt is to speak of an experience the participant has had with illness/injury/stroke, and the second prompt is to talk about an important event that has happened in their life. Procedure. A modified FACES protocol will be used to code facial expressions in videos obtained from AphasiaBank. The coding process is conducted with the audio muted to help eliminate distractions. The modified FACES protocol includes specific, operationalized criteria for what qualifies as a facial expression. This protocol does not include coding non-facial gestures or eye movements. However, we do take into account the narrowing and widening of eyes in addition to instances when the eyes are opened or closed with intention. A critical question we ask during coding is whether the facial expression conveys emotional content. We have established standards for intensity ratings and will outline how we arrived at these standards, as well as the distinctions between intensity ratings one, two, three, and four. The analysis process includes establishing a baseline expression for each participant and coding for 20 minutes before taking a break to maintain energy and productivity and to ensure accuracy in data collection. Additionally, secondary coding and a final pass for gestalt ratings are performed to ensure comprehensive analysis. Data Analysis. To address our research questions, we will conduct a comprehensive analysis, focusing on our participants' facial expressiveness during personal narrative storytelling. The analysis will involve quantifying the frequency of facial expressions exhibited by the participants. Additionally, we will consider the valence of these expressions, distinguishing between positive and negative emotional content.
We will also assess the intensity of facial expressions by counting the number of facial units engaged during each expression. This examination of facial data will allow us to discern not only the prevalence of expressions but also emotional content and intensity, ultimately providing a more in-depth understanding of the gender differences in non-verbal communication during personal narratives. Anticipated Results: Our study aims to explore if there is a statistically significant difference in the quantity, valence, and intensity of facial expressions between men and women. By using a standardized approach for measuring emotional facial expression production, we hope to shed light on the nuances of non-verbal communication during personal narratives and contribute to a better understanding of gender differences in emotional expression. Consistent with previous research, we anticipate that females will produce more frequent and intense emotional facial expressions than males (Ashmore, 1990; Brody & Hall, 1993; Hall, 1984). References: Ashmore, R. D. (1990). Sex, gender, and the individual. In L. A. Pervin (Ed.), Handbook of personality: Theory and research (pp. 486-526). New York: Guilford Press. Brody, L. R., & Hall, J. A. (1993). Gender and emotion. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 447-460). New York: Guilford Press. Hall, J. A. (1984). Nonverbal sex differences: Communication accuracy and expressive style. Baltimore: Johns Hopkins University Press. Kring, A. M., & Sloan, D. M. (2007). The facial expression coding system (FACES): Development, validation, and utility. Psychological Assessment, 19(2), 210–224. https://doi.org/10.1037/1040-3590.19.2.210.
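As an illustration of the counting rule described in this protocol, the following sketch maps the number of engaged facial units to an intensity rating; capping the rating at four is an assumption based on the one-to-four scale described above.

```python
# Illustrative mapping from counted facial units to an intensity rating (1-4),
# following the counting rule described in the modified FACES protocol above.
# Capping the rating at 4 is an assumption, not a documented rule.

def intensity_rating(facial_units_involved: int) -> int:
    """One engaged facial unit -> rating 1, two -> 2, and so on, capped at 4."""
    if facial_units_involved < 1:
        raise ValueError("An emotional expression involves at least one facial unit.")
    return min(facial_units_involved, 4)

print([intensity_rating(n) for n in [1, 2, 3, 5]])  # [1, 2, 3, 4]
```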