The world is witnessing a rapid evolution in artificial intelligence, with algorithms now capable of making decisions that were once the sole domain of humans. This progress raises a fundamental question: can ethics be programmed into machines? Ethical AI requires systems that can make decisions aligned with human values such as justice, safety, and fairness. The challenge, however, is that ethics is not purely computational: it is shaped by cultural, social, and temporal contexts.

Many efforts have emerged to embed an "ethical framework" into AI algorithms, through concepts such as explainable AI and ethical learning. Yet major challenges remain, including defining the source of ethical authority and determining how algorithms should handle morally ambiguous or conflicting scenarios.

The future of ethical AI depends heavily on interdisciplinary collaboration among scientists, philosophers, and policymakers to develop systems that are not only accurate but also just and humane.
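To make the difficulty of "conflicting scenarios" concrete, here is a deliberately toy sketch of one naive approach: scoring candidate actions against weighted ethical principles, with a hard veto when any single principle falls below a threshold. Every name, weight, and threshold here is hypothetical and for illustration only; real ethical-AI systems are far more complex, and the very choice of weights is itself the contested ethical question the text describes.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    # Hypothetical scores in [0, 1] for each ethical principle.
    scores: dict = field(default_factory=dict)

def evaluate(action, weights, veto_threshold=0.2):
    """Return (allowed, weighted_score).

    An action is vetoed outright if any principle scores below the
    threshold: a crude way to model a "hard" ethical constraint that
    cannot be traded off against the others.
    """
    if any(s < veto_threshold for s in action.scores.values()):
        return False, 0.0
    total = sum(weights[p] * s for p, s in action.scores.items())
    return True, total

# Illustrative weights: who chooses these, and on what authority,
# is exactly the open problem of "defining the ethical source".
weights = {"safety": 0.5, "fairness": 0.3, "transparency": 0.2}

actions = [
    Action("deploy_model_A",
           {"safety": 0.9, "fairness": 0.8, "transparency": 0.7}),
    Action("deploy_model_B",  # high safety, but fails the fairness veto
           {"safety": 0.95, "fairness": 0.1, "transparency": 0.9}),
]

for a in actions:
    allowed, score = evaluate(a, weights)
    print(a.name, allowed, round(score, 2))
```

Even this tiny sketch exposes the core tension the text raises: the numeric scores and weights smuggle in cultural and contextual judgments, and a scenario where two principles both demand a veto leaves the system with no computable answer.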