by Juliana Schifferes | Mar 19, 2023 | News
We are excited to announce that the Global Health and Education Projects, Inc. (GHEP) has been admitted as a member of the Open Access Scholarly Publishing Association (OASPA) based in The Hague, Netherlands, effective March 2023.
With this admission, GHEP's journals, the International Journal of Maternal and Child Health and AIDS (IJMA) and the International Journal of Translational Medical Research and Public Health (IJTMRPH), join a community of publications from scholar-led and professional publishers of books and journals across varied geographies and disciplines, alongside providers of infrastructure and other services.
OASPA is the international community for open-access publishing. It represents a diverse community of organizations engaged in open scholarship; OASPA works to encourage and enable open access as the predominant model of communication for scholarly outputs.
In her communication announcing GHEP’s membership approval, Lulu Stader, PhD, OASPA’s Membership Manager, said: “I am pleased to confirm that the Membership Committee has now approved the application as Scholar Publisher. Your membership is active immediately and your organization is now listed on our website as a member.”
OASPA is recognized as a trusted convenor of the broad, global spectrum of open-access stakeholders and a proven venue for productive collaboration.
OASPA membership means that GHEP Journals commit to meeting rigorous membership criteria, adhering to the organization's code of conduct and bylaws, and complying with globally accepted best practices for open-access journal publishing.
“GHEP is delighted to have been admitted to this prestigious organization,” said Dr. Romuladus Azuine, GHEP’s Executive Director. From its first day of publishing these two journals, GHEP has been committed to ensuring that they continue to grow in impact and reach while complying with global open-access publishing standards, he said, adding that “our authors should publish with our journals with the assurance that we will comply with international best practices.”
by Team GHEP | Jan 23, 2023 | Blog, Blog & News, News
Chatbots, ChatGPT, and Scholarly Manuscripts
WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications
January 20, 2023
Chris Zielinski1; Margaret Winker2; Rakesh Aggarwal3; Lorraine Ferris4; Markus Heinemann5; Jose Florencio Lapeña, Jr.6; Sanjay Pai7; Edsel Ing8; Leslie Citrome9; on behalf of the WAME Board
1Vice President, WAME; Centre for Global Health, University of Winchester, UK; 2Trustee, WAME; 3President, WAME; Associate Editor, Journal of Gastroenterology and Hepatology; Director, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India; 4Trustee, WAME; Professor, Dalla Lana School of Public Health, University of Toronto; 5Treasurer, WAME; Editor-in-Chief, The Thoracic and Cardiovascular Surgeon; 6Secretary, WAME; Editor, Philippine Journal of Otolaryngology-Head & Neck Surgery; 7Director, WAME; Working Committee, The National Medical Journal of India; 8Director, WAME; Section Editor, Canadian Journal of Ophthalmology; Professor, University of Toronto; 9Director, WAME; Editor-in-Chief, Current Medical Research and Opinion; Clinical Professor of Psychiatry & Behavioral Sciences, New York Medical College
Journals have begun to publish papers in which chatbots such as ChatGPT are shown as co-authors. The following WAME recommendations are intended to inform editors and help them develop policies regarding chatbots for their journals, to help authors understand how the use of chatbots should be attributed in their work, and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we expect these recommendations to evolve as well.
A chatbot is a tool “[d]riven by [artificial intelligence], automated rules, natural language processing (NLP), and machine learning (ML)…[to] process data to deliver responses to requests of all kinds.”1 Artificial intelligence (AI) “broadly refers to the idea of computers that can learn and make decisions in a human-like way.”2 Chatbots have been used in recent years by many companies, including those in healthcare, for providing customer service, routing requests, or gathering information.
ChatGPT is a recently released chatbot that “is an example of generative AI because it can create something completely new that has never existed before,”3 in the sense that it can use existing information organized in new ways. ChatGPT has many potential uses, including “summarising long articles, for example, or producing a first draft of a presentation that can then be tweaked.”4 It may help researchers, students, and educators generate ideas,5 and even write essays of reasonable quality on a particular topic.6 Universities are having to revamp how they teach as a result.7
ChatGPT has many limitations, as recognized by its own creators: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers…Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended… While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior.”8 And, “[u]nlike Google, ChatGPT doesn’t crawl the web for information on current events, and its knowledge is restricted to things it learned before 2021, making some of its answers feel stale.”9 OpenAI is currently working on an improved version that is “better at generating text than previous versions” and several other companies are creating their own “generative AI tools.”7
Chatbots are “trained” using libraries of existing texts. Consequently, in response to specific input from the human operator (a “question” or “seed text”), chatbots respond with an “answer” or other output. Ultimately, this output comprises a selection of the training materials adapted according to the algorithms. Since chatbots are not conscious,10 they can only repeat and rearrange existing material. No new thought goes into their statements: they can only be original by accident. Since chatbots draw on the library of existing texts on which they were trained, there is a risk that they might repeat them verbatim in some circumstances, without revealing their source. According to a recent preprint that used ChatGPT to generate text, “The percentage of correct references in the preliminary text, obtained directly from ChatGPT, was just 6%.”11 Thus, if chatbot output is to be published in an academic journal, to avoid plagiarism, the human author and editor must ensure that the text includes full correct references, to exactly the same degree as is required of human authors.
More alarmingly, ChatGPT may actually be capable of lying intentionally – “the intentionality is important, as the liar knows the statement they are making is false but does it anyway to fulfill some purpose…” as demonstrated by Davis.12 Of course, ChatGPT is not sentient and does not “know” it is lying, but its programming enables it to fabricate “facts.”
Chatbots are not legal entities and do not have a legal personality. One cannot sue, arraign in court, or punish a chatbot in any way. The terms of use and accepted responsibilities for the results of using the software are set out in the license documentation issued by the company making the software available. Such documentation is similar to that produced for other writing tools, such as Word, PowerPoint, etc. Just as Microsoft accepts no responsibility for whatever one writes with Word, ChatGPT’s creator OpenAI accepts no responsibility for any text produced using their product: their terms of use include indemnity, disclaimers, and limitations of liability.13 Only ChatGPT’s users would be potentially liable for any errors it makes. Thus, listing ChatGPT as an author, which is already happening14,15 and even being encouraged,16 may be misdirected and not legally defensible.
While ChatGPT may prove to be a useful tool for researchers, it represents a threat to scholarly journals because ChatGPT-generated articles may introduce false or plagiarized content into the published literature. Peer review may not detect ChatGPT-generated content: researchers can have a difficult time distinguishing ChatGPT-generated abstracts from those written by authors.17 Those most knowledgeable about the tool are wary: a large AI conference banned the use of ChatGPT and other AI language tools for conference papers.17
Looked at another way, chatbots can help produce fraudulent papers, which goes against the very philosophy of science. It may be argued that the use of chatbots resembles the work of paper mills, albeit with one difference: while the latter clearly intends to deceive, this may not always be true of chatbot use. However, the mere fact that AI is capable of helping generate erroneous ideas makes it unscientific and unreliable, and hence it should have editors worried.
On a related note, 2022 also saw OpenAI, the same company that created ChatGPT, release DALL-E 2,18 another ML-based system that can create realistic images and art from a description submitted to it as natural-language text. More recently, Google has released a similar product named Imagen.19 These tools have raised concerns similar to those about ChatGPT. Interestingly, each image generated using DALL-E 2 includes a signature in the lower right corner to indicate the image’s provenance20; however, the signature can easily be removed using one of several simple methods that are a web search away.
With the advent of ChatGPT and DALL-E 2, and with more tools on the anvil, editors need to establish journal policies on the use of such technology and require access to tools that can detect the content it generates. Scholarly publishing guidelines for authors should be developed with input from diverse groups, including researchers whose first language is not English. This may take some time. In the meantime, we offer the following recommendations for editors and authors.
WAME Recommendations:
- Chatbots cannot be authors. Chatbots cannot meet the requirements for authorship as they cannot understand the role of authors or take responsibility for the paper. Chatbots cannot meet ICMJE authorship criteria, particularly “Final approval of the version to be published” and “Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.”21 A chatbot cannot understand a conflict of interest statement or have the legal standing to sign a statement. Chatbots have no affiliation independent of their creators. They cannot hold copyright. Authors submitting a manuscript must ensure that all those named as authors meet the authorship criteria, which clearly means that chatbots should not be included as authors.
- Authors should be transparent when chatbots are used and provide information about how they were used. Since the field is evolving quickly at present, authors using a chatbot to help them write a paper should declare this fact and provide full technical specifications of the chatbot used (name, version, model, source) and method of application in the paper they are submitting (query structure, syntax). This is consistent with the ICMJE recommendation of acknowledging writing assistance.22
- Authors are responsible for the work performed by a chatbot in their paper (including the accuracy of what is presented, and the absence of plagiarism) and for appropriate attribution of all sources (including for material produced by the chatbot). Human authors of articles written with the help of a chatbot are responsible for the contributions made by chatbots, including their accuracy. They must be able to assert that there is no plagiarism in their paper, including in text produced by the chatbot. Human authors must ensure there is appropriate attribution of all quoted material, including full citations. They should declare the specific query function used with the chatbot. Authors will need to seek and cite the sources that support the chatbot’s statements. Since a chatbot may be designed to omit sources that oppose viewpoints expressed in its output, it is the authors’ duty to find, review, and include such counterviews in their articles.
- Editors need appropriate tools to help them detect content generated or altered by AI and these tools must be available regardless of their ability to pay. Many medical journal editors use manuscript evaluation approaches from the 20th century but now find themselves face-to-face with AI innovations and industries from the 21st century, including manipulated plagiarized text and images and paper mill-generated documents. They have already been at a disadvantage when trying to sort the legitimate from the fabricated, and chatbots such as ChatGPT take this challenge to a new level. Editors need access to tools that will help them evaluate content efficiently and accurately. Publishers working through STM are already developing such tools.23 Such tools should be made available to editors regardless of ability to pay for them, for the good of science and the public. Facilitating their use through incorporation into open-source publishing software such as Public Knowledge Project’s Open Journal Systems,24 and education about the use and interpretation of screening outputs, would make automated screening of manuscript submissions a much-needed reality for many editors.
References
- What is a chatbot? Oracle Cloud Infrastructure. Accessed January 18, 2023. https://www.oracle.com/chatbots/what-is-a-chatbot/
- Newman J. ChatGPT? Stable Diffusion? Generative AI jargon, explained. Fast Company. December 26, 2022. Accessed January 18, 2023. https://www.fastcompany.com/90826308/chatgpt-stable-diffusion-generative-ai-jargon-explained
- Marr B. How Will ChatGPT affect your job if you work in advertising and marketing? Forbes. January 17, 2023. Accessed January 18, 2023. https://www.forbes.com/sites/bernardmarr/2023/01/17/how-will-chatgpt-affect-your-job-if-you-work-in-advertising-and-marketing/?sh=241ef86c39a3
- Naughton J. The ChatGPT bot is causing panic now – but it’ll soon be as mundane a tool as Excel. The Guardian. January 7, 2023. Accessed January 18, 2023. https://www.theguardian.com/commentisfree/2023/jan/07/chatgpt-bot-excel-ai-chatbot-tech
- Roose K. Don’t Ban ChatGPT in Schools. Teach With It. NYTimes. January 12, 2023. Accessed January 18, 2023. https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html
- Hern A. AI bot ChatGPT stuns academics with essay-writing skills and usability. The Guardian. December 4, 2022. Accessed January 18, 2023. https://www.theguardian.com/technology/2022/dec/04/ai-bot-chatgpt-stuns-academics-with-essay-writing-skills-and-usability
- Huang K. Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach. NYTimes. January 16, 2023. Accessed January 18, 2023. https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html
- ChatGPT. OpenAI. Accessed January 18, 2023. https://openai.com/blog/chatgpt/
- Roose K. The Brilliance and Weirdness of ChatGPT. NYTImes. December 5, 2022. Accessed January 18, 2023. https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html
- Vallance C. Google engineer says Lamda AI system may have its own feelings. BBC News. June 13, 2022. Accessed January 18, 2023. https://www.bbc.co.uk/news/technology-61784011
- Blanco-Gonzalez A, Cabezon A, Seco-Gonzalez A, et al. The role of AI in drug discovery: challenges, opportunities, and strategies [preprint]. arXiv. 2022. Accessed January 18, 2023. https://doi.org/10.48550/arxiv.2212.08104
- Davis P. Did ChatGPT Just Lie To Me? The Scholarly Kitchen. January 13, 2023. Accessed January 18, 2023. https://scholarlykitchen.sspnet.org/2023/01/13/did-chatgpt-just-lie-to-me/
- Terms of use. OpenAI. December 13, 2022. Accessed January 18, 2023. https://openai.com/terms/
- O’Connor S, ChatGPT. Open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Educ Pract. 2023;66:103537. doi: 10.1016/j.nepr.2022.103537
- ChatGPT Generative Pre-trained Transformer; Zhavoronkov A. Rapamycin in the context of Pascal’s Wager: generative pre-trained transformer perspective. Oncoscience. 2022;9:82-84. doi: 10.18632/oncoscience.571
- Call for case reports contest written with the assistance of ChatGPT. Cureus. January 17, 2023. Accessed January 20, 2023. https://www.cureus.com/newsroom/news/164
- Else H. Abstracts written by ChatGPT fool scientists. Nature. 2023;613:423. Accessed January 18, 2023. https://www.nature.com/articles/d41586-023-00056-7
- DALL-E 2. OpenAI. Accessed January 20, 2023. https://openai.com/dall-e-2/
- Imagen. Google. Accessed January 20, 2023. https://imagen.research.google/
- Mishkin P, Ahmad L, Brundage M, Krueger G, Sastry G. DALL·E 2 preview – risks and limitations. Github. 2022. Accessed January 20, 2023. https://github.com/openai/dalle-2-preview/blob/main/system-card.md
- Who is an author? Defining the role of authors and contributors. ICMJE. Accessed January 18, 2023. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
- Non-author contributors, defining the role of authors and contributors. ICMJE. Accessed January 18, 2023. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
- STM integrity hub. STM. Accessed January 18, 2023. https://www.stm-assoc.org/stm-integrity-hub/
- Open Journal Systems. Public Knowledge Project. Accessed January 18, 2023. https://pkp.sfu.ca/software/ojs/
Source: Zielinski C, Winker M, Aggarwal R, Ferris L, Heinemann M, Lapeña JF, Pai S, Ing E, Citrome L for the WAME Board. Chatbots, ChatGPT, and Scholarly Manuscripts: WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications. WAME. January 20, 2023. https://wame.org/page3.php?id=106
by Team GHEP | Nov 21, 2021 | News
A comprehensive study of health and socioeconomic inequalities among American Indians and Alaska Natives (AIANs) in the United States, published in GHEP’s International Journal of Translational Medical Research and Public Health (IJTMRPH), reports that AIAN populations and tribal communities continue to experience disproportionately high rates of violence, injuries, youth suicide, obesity, smoking, poverty, unemployment, disability, diabetes, kidney disease, heart disease, hypertension, depression and psychological distress, liver cirrhosis and alcohol-related mortality, poor overall health, infant and premature mortality, and lower life expectancy, compared to the majority White population as well as the general population of the United States.
The study authored by researchers from the Health Resources and Services Administration (HRSA), an agency within the US Department of Health and Human Services, emphasized the need for addressing inequities in social determinants as a key policy strategy for tackling health inequalities among AIANs and other racial/ethnic groups in the United States.
Specifically, the seminal study which rigorously examined different aspects of health and socioeconomic status reported the following key findings:
- In 2019, life expectancy at birth for AIANs was 76.9 years, significantly lower than that for Asian/Pacific Islanders (88.2 years), Hispanics (83.7), and non-Hispanic Whites (79.1), and slightly higher than that for African Americans (76.2);
- The infant mortality rate for AIANs was 8.7 per 1,000 live births, 79% higher than the rate for non-Hispanic Whites and 114% higher than the rate for Asian/Pacific Islanders;
- High mortality rates among AIANs, particularly in rural areas, were found for working-age adults and for diabetes, liver cirrhosis, alcohol-related causes, youth suicide, and unintentional injuries;
- About 10% of AIAN adults experienced serious psychological distress, a rate two to five times higher than that of other racial and ethnic groups in the US; and
- AIANs had the highest overall disability, mental and ambulatory disability, health uninsurance, unemployment, and poverty rates in the US, with poverty rates for some AIAN tribes approaching or exceeding 40%.
For additional information, please contact the study’s lead author, Gopal K. Singh, PhD, of the Health Resources and Services Administration, U.S. Department of Health and Human Services (email: [email protected]).
by Team GHEP | Feb 25, 2020 | Archive, Blog & News, News
The International Journal of Translational Medical Research and Public Health (IJTMRPH), Washington, DC, USA, is sponsoring a special journal collection of articles on “Human Resources for Health (HRH) in Asia: Current and Emerging Issues.” The special collection will showcase emerging scientific innovations in the field of global human resources for health (HRH) in the Asian continent. We also welcome papers sharing studies or lessons learned in HRH from across the world that may be pertinent to Asia.
Submission Deadline: June 5, 2020 (Early submission is encouraged).
Guest Editor:
Professor Shiv Chandra Mathur, MBBS, MD
Independent Public Health Consultant
Former Professor and Head, Community Medicine Department, Government Medical College, Bhilwara 301011, India
Former Chair, Asia-Pacific Action Alliance on Human Resources for Health
Questions & Inquiries:
For more information, questions, or inquiries, please contact: [email protected]
Download the Call for Papers
Download PDF
Download JPEG
by Team GHEP | Aug 17, 2019 | Blog & News, News
Although international students come to the U.S. to improve their academic and social status through graduate education, they are at increased risk of experiencing social isolation and loneliness, which are damaging to their physical and mental health.
A new study published in the International Journal of Translational Medical Research and Public Health showed that loneliness and social isolation greatly impact an individual’s mental and physical health, particularly for international students at the university level.
New research studies across the world link loneliness and social isolation to both increased morbidity and premature mortality, making them major public health problems. The new study, however, is the first to explore this phenomenon among foreign graduate students at a major research university in the U.S., and across various levels of graduate education among students from different parts of the world.
According to the study authors, Dr. Mehrete Girmay and Dr. Gopal Singh, their study, entitled “Social Isolation, Loneliness, and Mental and Emotional Well-being among International Students in the United States,” is one of the first attempts to comprehensively explore the short- and long-term effects of loneliness and social isolation among international students.
Some of the key findings of the study are:
• Social isolation and loneliness are growing public health epidemics with the potential to cause detrimental health consequences such as heart disease, high blood pressure, cognitive decline, anxiety, depression, and premature mortality;
• There is a reciprocal relationship between health-related factors and risk factors of social isolation and loneliness among international students;
• University and community support are crucial in the potential remediation of adjustment needs for the international student population in the United States; and
• Poor acculturation can have detrimental effects on students’ mental and physical health and there is a critical need for more effort to be focused on attending to both the mental and physical health needs of migrant students during their stay at the host university.
For additional information, please contact the study’s lead author, Dr. Mehrete Girmay of the Health Resources and Services Administration, US Department of Health and Human Services (email: [email protected]).
by Team GHEP | Jul 13, 2019 | News
A new study by the Editors of the International Journal of Maternal and Child Health and AIDS (IJMA) presents the first known long-term data on the consequences of maternal opioid use for the physical health and developmental outcomes of children, drawing on 20 years of clinical data. The study shows that children exposed to opioids in the womb are more likely to face short- and long-term physical and mental difficulties as they grow up.
The study published in JAMA Network OPEN by Dr. Romuladus Azuine, IJMA Editor-in-Chief and Dr. Gopal Singh, IJMA Editor, showed that for babies, exposure to opioids in the womb was associated with higher risks of fetal growth restriction and preterm birth.
According to the study, among preschool-aged children, opioid exposure was associated with increased risks of lack of expected physiological development and conduct disorder/emotional disturbance. For school-aged children, opioid exposure was associated with a higher risk of attention-deficit/hyperactivity disorder (ADHD).
The U.S. Government is making concerted efforts to identify risk factors and improve prevention strategies to reduce the health effects of opioids. In fact, reducing the opioid epidemic is a key policy of the Trump administration, with billions of dollars budgeted to address this public health problem.
Using decades of data from the Boston Birth Cohort, one of the longest existing cohorts in the U.S., Drs. Azuine, Singh, and colleagues found that 454 of the 8509 babies (5.3%) were exposed to opioids in the womb. There was an upward trend in Neonatal Abstinence Syndrome (NAS) over the last 15 years, ranging from a low of 12.1 per 1,000 hospital births in 2003 to a high of 32 per 1,000 births in 2016.
“We have in our hands an epidemic that bears dire risks and consequences for babies, mothers, and future generations. Regardless of who we are: program planners, policy makers, or community leaders, these findings give us enough information to act. The time to act and stop the opioid epidemic is now,” said Dr. Azuine.
LINKS AND MEDIA COVERAGE:
Azuine RE, Ji Y, Chang H, et al. Prenatal Risk Factors and Perinatal and Postnatal Outcomes Associated With Maternal Opioid Exposure in an Urban, Low-Income, Multiethnic US Population. JAMA Netw Open. 2019;2(6):e196405. Published online June 28, 2019. doi:10.1001/jamanetworkopen.2019.6405
Brogly S. Maternal and Child Health After Prenatal Opioid Exposure. JAMA Netw Open. 2019;2(6):e196428. Published online June 28, 2019. doi:10.1001/jamanetworkopen.2019.6428
Birth, Child Outcomes Associated With Moms Using Opioids During Pregnancy, JAMA Network Open, June 28, 2019.
‘Flawed’ study shows possible lasting effects from drug exposure in the womb, Boston Globe, June 28, 2019.
Opioid exposure leads to poor perinatal and postnatal outcomes, MDEdge, July 10, 2019.
Prenatal opioid exposure could bring long-term harm to kids, MedicalExpress, June 28, 2019.
Significant health risks associated with opioid use during pregnancy, The Evidence Base, July 2, 2019.
International Journal of Maternal and Child Health and AIDS (IJMA).