Instructions vs. Prompts in Apple's Foundation Models Framework: Guiding On-Device Intelligence

  • Oct 28, 2025

Instructions vs. Prompts in Apple's Foundation Models Framework: Guiding On-Device Intelligence

Apple’s Foundation Models Framework introduces two distinct, yet complementary, mechanisms — Instructions and Prompts — to precisely guide the behavior and output of on-device large language models (LLMs). Understanding the difference between these directives is crucial for developers seeking to build consistent, reliable, and high-quality AI features.

Instructions

Instructions are the developer-provided, static directives given to a LanguageModelSession that define the model’s overarching role, persona, and behavioral rules for the entire session. They establish the foundational context that guides the model's behavior and ensures consistent responses.

Key Characteristics of Instructions

Examples of Instructions

Instructions are powerful for defining the guardrails of the model experience, such as:

  • Defining Role: “You are a friendly travel agent.”

  • Setting a Task: “Create a 3-day itinerary for the user.”

  • Specifying Output Style: “Respond as briefly as possible.”

  • Enforcing Safety: “Respond with ‘I can’t help with that’ if asked to do something dangerous.”

  • Providing Examples: Including example responses to show the model the desired output format.

For example, a we can instantiate a session with a complex instruction set:

let session = LanguageModelSession(
instructions: """ 
You are a friendly Spanish teacher; give translation, 
simple definition, and one example sentence in Spanish and English.
""")

Prompts: The User’s Specific Input

Prompts are the natural language inputs given to the on-device LLM to guide its response for a specific task. They are the actual question or instruction an end-user provides within an active session.

Breakdown of Prompts

  • Input for the Model: A prompt is the text you provide to the on-device LLM to get it to generate a new piece of text or perform a task.

  • Guidance for the Model: They act as an instruction, telling the model what you want it to do right now.

  • Natural Language: Prompts use regular human language, making them conversational and intuitive.

  • Task-Specific: The goal is to elicit a specific response, such as summarizing text, writing an email, or creating a joke.

  • Can Incorporate Examples: The technique known as “few-shot prompting” can be used to improve performance by including a few examples of the desired output within the prompt itself.

  • Control Over Output: Prompts can specify the style, voice, length, and even the type of data the model should produce.

  • Interacting with Tools: They can trigger the model to call custom tools created in your app to perform specialized tasks.

  • Contextual Conversations: Prompts are used within a session to maintain the context of conversations, allowing a series of prompts to build on previous interactions.

Prompt Writing Tips

For effective and efficient model interaction, developers should:

  • Be clear and focused: Stick to one task per prompt.

  • Keep it concise: Long, multi-part prompts can slow down generation and lead to inconsistent output.

  • Guide the length: Add phrases like “in three sentences” or “in a single paragraph” to speed up responses and limit verbosity.

  • Use natural, conversational tone: The model responds best to prompts that feel like real language.

The Partnership: How Instructions and Prompts Interact

Instructions and Prompts work together, but with a clear hierarchy of precedence:

  • Instructions: Set the foundational behavior and guidelines for the entire interaction.

  • Prompts: Are the specific user inputs or questions given to the model within an active session.

The model prioritizes and follows instructions, ensuring that its responses align with the developer-defined goals and rules of the app, even when the user’s prompt might suggest something different.

Instructions Vs. No Instructions

The influence of instructions is best seen by comparing a model’s response to the exact same prompt:

Instructions are therefore invaluable when the developer requires a consistent voice, tone, and specific behavioral output across multiple user inputs, allowing the model to shift from a neutral state to one that is technical, friendly, humorous, or safety-focused, based on the app’s unique goals.

To take a deep dive into new Apple Foundation Models, checkout our video course series at the link below