
The rapid advancement of generative artificial intelligence, sparked by the release of OpenAI's ChatGPT, has highlighted an array of ethical concerns surrounding the development and deployment of AI systems. From biased hiring tools to the misuse of personal data, ensuring that AI systems align with ethical principles has become a critical challenge. This paper explores key ethical pillars of AI development: fairness and bias mitigation, transparency and explainability, privacy and data protection, and accountability and governance. Using our development of AI systems at Bayezian as a case study, we demonstrate how adherence to frameworks such as the EU AI Act, the inclusion of diverse development teams, and the implementation of human-in-the-loop processes can support the creation of AI systems that are both innovative and ethically sound. By fusing legal compliance with societal values and technical best practices, we argue that AI can be a tool that benefits users, organizations, and society as a whole.