10.1 AI Ethics & Responsible Data Usage Policy
- Fair, Transparent & Secure AI Solutions

Effective Date: January 1, 2025
Last Updated: January 1, 2025

1. Introduction
1.1 Net Onboard Sdn Bhd is committed to ensuring that Artificial Intelligence (AI) technologies and automated systems are developed, deployed, and used ethically, transparently, and responsibly.
1.2 This AI Ethics & Responsible Data Usage Policy governs:
– The ethical principles guiding AI development, deployment, and application.
– Fairness, accountability, transparency, and data protection in AI-driven systems.
– Compliance with AI-related legal frameworks, cybersecurity regulations, and human rights standards.
1.3 This policy applies to:
– All AI-powered technologies, machine learning models, and automated decision-making systems used by Net Onboard.
– All employees, data scientists, developers, customers, and business partners involved in AI solutions.
1.4 This policy aligns with:
– Malaysia’s Personal Data Protection Act (PDPA) 2010
– General Data Protection Regulation (GDPR)
– ISO/IEC 42001 AI Management System Standards
– AI Ethics Guidelines by the European Commission and OECD Principles on AI

2. Ethical Principles for AI Development & Deployment
2.1 Fairness & Non-Discrimination:
– AI algorithms must be free from bias and discrimination based on race, gender, religion, or other personal attributes.
– AI models will undergo bias testing and fairness evaluations before deployment.
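As a purely illustrative sketch of what a pre-deployment fairness evaluation can involve (the metric, the tolerance of 0.1, and all sample data below are hypothetical, not values prescribed by this policy), a simple demographic-parity check compares positive-outcome rates across groups:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-outcome rates
    between any two groups (0.0 = perfectly balanced)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical pre-deployment gate: block release if the gap
# exceeds an agreed tolerance (0.1 here is illustrative only).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # gap == 0.5
passes_fairness_gate = gap <= 0.1            # False: model is flagged
```

In practice such a check would be one of several fairness metrics reviewed before deployment, alongside manual review of the flagged model.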
2.2 Transparency & Explainability:
– AI systems must provide clear explanations for automated decisions affecting customers and stakeholders.
– Customers have the right to request an explanation of how AI models affect their services or data.
2.3 Accountability & Human Oversight:
– AI decision-making must include human oversight to prevent harm or unintended consequences.
– Developers and stakeholders are responsible for ensuring ethical AI practices and accountability.
2.4 Privacy & Data Security:
– AI models must be designed to comply with data protection laws and prioritize user privacy.
– Data used for AI training must be collected, stored, and processed securely with encryption.

3. Responsible Data Usage in AI Systems
3.1 Data Collection & Consent Requirements:
– AI-driven data collection must comply with Malaysia’s PDPA and the GDPR.
– Users must provide explicit consent before their data is used for AI-powered personalization or analysis.
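A purpose-specific consent gate is one way to enforce the explicit-consent requirement above. The sketch below is illustrative only; the record layout and purpose names are hypothetical:

```python
def consent_gate(user_record: dict, purpose: str) -> bool:
    """Allow AI processing only when the user has explicitly
    consented to this specific purpose (a PDPA/GDPR-style gate).
    Absence of a consent record means no processing."""
    return purpose in user_record.get("consented_purposes", set())

# Hypothetical user who consented to personalization but nothing else.
user = {"id": "u-001", "consented_purposes": {"personalization"}}
consent_gate(user, "personalization")  # True  -> processing may proceed
consent_gate(user, "profiling")        # False -> processing must be blocked
```

The key design point is that consent is checked per purpose, not as a single blanket flag, which mirrors the "explicit consent" wording above.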
3.2 Prohibited Data Usage in AI Models:
– Net Onboard prohibits AI models from processing:
  – Sensitive personal data (e.g., health, biometric, financial data) without proper safeguards.
  – Data obtained without consent or legal authorization.
3.3 Data Retention & Deletion Policies:
– AI-generated data will be retained only as long as necessary for its intended use.
– Users can request the removal of their AI-processed data under the “Right to be Forgotten” principle.
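The retention and erasure rules above can be sketched as a periodic purge job. This is a minimal illustration, not the actual implementation: the 365-day period, the record fields, and the `erasure_requested` flag are all hypothetical:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # illustrative period, not a policy-mandated value

def expired(created: datetime, now: datetime) -> bool:
    """True when a record has outlived its retention period."""
    return now - created > RETENTION

def purge(records: list, now: datetime) -> list:
    """Drop records that are past retention, and drop records whose
    subject has exercised the 'Right to be Forgotten'."""
    return [
        r for r in records
        if not expired(r["created"], now) and not r.get("erasure_requested")
    ]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "created": datetime(2023, 1, 1, tzinfo=timezone.utc)},   # past retention
    {"id": 2, "created": datetime(2025, 1, 1, tzinfo=timezone.utc)},   # within retention
    {"id": 3, "created": datetime(2025, 2, 1, tzinfo=timezone.utc),
     "erasure_requested": True},                                       # user requested deletion
]
kept = purge(records, now)  # only record 2 survives
```

A production job would additionally log each deletion for audit purposes and propagate erasure to backups and downstream AI systems.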

4. AI Security & Cyber Threat Mitigation
4.1 AI Cybersecurity Risk Management:
– AI systems must be secured against hacking, manipulation, and adversarial attacks.
– Security teams must conduct penetration testing and vulnerability assessments on AI models.
4.2 Preventing AI Misuse & Unauthorized Applications:
– AI technologies must not be used for unethical activities, misinformation, or surveillance.
– Employees and developers must report AI misuse or security vulnerabilities immediately.

5. AI in Business Decision-Making & Customer Services
5.1 Ethical AI in Customer Interactions:
– AI-powered chatbots, recommendation engines, and automation tools must be clearly identified as AI-driven.
– Customers must have the option to interact with human representatives instead of AI when making service requests.
5.2 Limitations & Accuracy of AI-Generated Information:
– AI-generated content, insights, or decisions must be reviewed and validated by human experts.
– If an AI recommendation may impact financial, medical, or legal outcomes, a human decision-maker must verify its accuracy.

6. AI Compliance Audits & Performance Monitoring
6.1 Annual AI Ethics & Compliance Review:
– AI models undergo yearly ethical audits and risk assessments to ensure compliance.
– External auditors may be engaged to validate AI fairness, bias detection, and security.
6.2 User Feedback & AI Transparency Reports:
– Customers can submit AI-related concerns or complaints to AI governance officers.
– Net Onboard will publish AI Transparency Reports summarizing model performance and ethical considerations.

7. Enforcement & Consequences for AI Misuse
7.1 Policy Violation Consequences:
– Employees or vendors who misuse AI will face disciplinary action, termination, or legal penalties.
– Customers or third parties using Net Onboard’s AI tools for illegal or unethical purposes will have their access revoked.
7.2 Legal Liability & Compliance Breaches:
– Any AI-driven data breach, discrimination, or unethical decision-making incident will be investigated and reported to relevant authorities.

8. Governing Law & Dispute Resolution
8.1 This policy is governed by Malaysian law, including:
– The Personal Data Protection Act (PDPA) 2010
– The Communications and Multimedia Act 1998
– International AI and cybersecurity guidelines
8.2 Disputes related to AI ethics will be resolved through mediation before arbitration or litigation.

9. Amendments & Updates
9.1 Net Onboard reserves the right to update this AI Ethics & Responsible Data Usage Policy at any time.
9.2 Users, employees, and partners will be notified of material changes via email or official announcements.
For AI compliance inquiries, contact [email protected].