The development and increasing use of artificial intelligence (AI), particularly in high-risk application areas, call for attention to the governance of AI systems. Organizations and researchers have proposed AI ethics principles, but translating these principles into practice-oriented frameworks has proven difficult. This paper develops meta-requirements for organizational AI governance frameworks to help translate ethical AI principles into practice and to align operations with the forthcoming European AI Act. Adopting a design science research approach, we first put forward research-based premises and then report the design method employed in an industry-academia research project. On this basis, we present seven meta-requirements for AI governance frameworks. The paper contributes to IS research on AI governance by collating existing knowledge into meta-requirements and by advancing a design approach to AI governance. The study underscores that governance frameworks must incorporate the characteristics of AI, its application contexts, and the different sources of requirements.