References¶
LoRA and Parameter-Efficient Fine-Tuning¶
- Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., & Chen, W. (2021). LoRA: Low-Rank Adaptation of Large Language Models. arXiv:2106.09685. https://arxiv.org/abs/2106.09685
- Dettmers, T., Pagnoni, A., Holtzman, A., & Zettlemoyer, L. (2023). QLoRA: Efficient Finetuning of Quantized LLMs. arXiv:2305.14314. https://arxiv.org/abs/2305.14314
- Mangrulkar, S., Gugger, S., Debut, L., Belkada, Y., Paul, S., & Bossan, B. (2022). PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods. GitHub. https://github.com/huggingface/peft
- Lester, B., Al-Rfou, R., & Constant, N. (2021). The Power of Scale for Parameter-Efficient Prompt Tuning. arXiv:2104.08691. https://arxiv.org/abs/2104.08691
- Li, X. L., & Liang, P. (2021). Prefix-Tuning: Optimizing Continuous Prompts for Generation. arXiv:2101.00190. https://arxiv.org/abs/2101.00190
Microsoft Phi-4 Series and AI Safety¶
- Microsoft (2024). Phi-4-mini-instruct model card. Hugging Face. https://huggingface.co/microsoft/Phi-4-mini-instruct
- Abdin, M., Jacobs, S. A., Awan, A. A., et al. (2024). Phi-4 Technical Report. arXiv:2412.08905. https://arxiv.org/abs/2412.08905
- Microsoft (2024). Responsible AI at Microsoft. https://www.microsoft.com/en-us/ai/responsible-ai
- Bai, Y., Kadavath, S., Kundu, S., et al. (2022). Constitutional AI: Harmlessness from AI Feedback. arXiv:2212.08073. https://arxiv.org/abs/2212.08073
- Ouyang, L., Wu, J., Jiang, X., et al. (2022). Training language models to follow instructions with human feedback. arXiv:2203.02155. https://arxiv.org/abs/2203.02155
Medical AI Safety and Ethics¶
- U.S. Food and Drug Administration (2024). Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. FDA Device Guidance.
- World Health Organization (2023). Ethics and Governance of Artificial Intelligence for Health. WHO Technical Report. https://www.who.int/publications/i/item/9789240029200
- Rajpurkar, P., Chen, E., Banerjee, O., & Topol, E. J. (2022). AI in health and medicine. Nature Medicine, 28(1), 31-38. https://doi.org/10.1038/s41591-021-01614-0
- Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing Machine Learning in Health Care—Addressing Ethical Challenges. New England Journal of Medicine, 378(11), 981-983. https://doi.org/10.1056/NEJMp1714229
- Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689-707. https://doi.org/10.1007/s11023-018-9482-5
Technical Implementation and Tools¶
- Hugging Face Team (2024). Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. GitHub. https://github.com/huggingface/transformers
- PyTorch Team (2024). PyTorch. https://pytorch.org/
- Dettmers, T. (2023). bitsandbytes: 8-bit CUDA Functions for PyTorch. GitHub. https://github.com/TimDettmers/bitsandbytes
- Lhoest, Q., Villanova del Moral, A., Jernite, Y., et al. (2021). Datasets: A Community Library for Natural Language Processing. arXiv:2109.02846. https://arxiv.org/abs/2109.02846
- Wolf, T., Debut, L., Sanh, V., et al. (2020). Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (pp. 38-45). https://doi.org/10.18653/v1/2020.emnlp-demos.6
Dataset Creation and Evaluation¶
- Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. R. (2019). GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. arXiv:1804.07461. https://arxiv.org/abs/1804.07461
- Hendrycks, D., Burns, C., Basart, S., et al. (2021). Measuring Massive Multitask Language Understanding. arXiv:2009.03300. https://arxiv.org/abs/2009.03300
- Zhong, R., Lee, K., Zhang, Z., & Klein, D. (2021). Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections. arXiv:2104.04670. https://arxiv.org/abs/2104.04670
Medical Knowledge and Guidelines¶
- American College of Cardiology/American Heart Association (2018). 2017 ACC/AHA/AAPA/ABC/ACPM/AGS/APhA/ASH/ASPC/NMA/PCNA Guideline for the Prevention, Detection, Evaluation, and Management of High Blood Pressure in Adults. Hypertension, 71(6), e13-e115.
- American Diabetes Association (2024). Standards of Care in Diabetes—2024. Diabetes Care, 47(Supplement_1). https://diabetesjournals.org/care/issue/47/Supplement_1
- Infectious Diseases Society of America (2019). Practice Guidelines. https://www.idsociety.org/practice-guideline/
- Global Initiative for Asthma (2024). Global Strategy for Asthma Management and Prevention. https://ginasthma.org/
Research Methodology and Statistics¶
- Bengio, Y., Courville, A., & Vincent, P. (2013). Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1798-1828.
- Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). Attention Is All You Need. In Advances in Neural Information Processing Systems (Vol. 30). https://arxiv.org/abs/1706.03762
- Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805. https://arxiv.org/abs/1810.04805
Additional Resources¶
Online Communities and Forums¶
- Hugging Face Community: https://huggingface.co/community
- PyTorch Forums: https://discuss.pytorch.org/
- AI Safety Discussion: https://www.alignmentforum.org/
- Data Science Alliance: http://datasciencealliance.org/
Educational Courses and Tutorials¶
- Hugging Face NLP Course: https://huggingface.co/course
Last Updated¶
January 2026