Activity 1
These are the unicorn companies students worked with in Activity 1.
Feedback
One of the central learnings of the first two modules is that “technology for the sake of technology is not a good strategy.”
In digital marketing, technology should only be adopted when it solves a clearly defined customer or market problem and supports a specific business objective.
Such a problem could be a customer pain point, an operational inefficiency, or a competitive gap. The business objective could be to increase customer engagement, improve conversion rates, or enhance operational efficiency. Both link back to value creation and competitive advantage (how the technology improves customer experience or business capability in a way competitors cannot easily replicate), core concepts from digital transformation. Furthermore, submissions were expected to connect their ideas and solutions to the frameworks and concepts covered in class. Finally, any use of data-driven decision making, showing that the proposed business model is evidence-based, was highly desirable.
Example: Canva — Two Ways to Recommend Technology
Version 1: Technology-Only (What many submissions looked like)
Canva should adopt AI-powered design tools. AI is a trending technology that can generate designs automatically. Canva could also use augmented reality (AR) to let users preview designs in real-world settings. These technologies are innovative and would make Canva more advanced.
Version 2: Theory-Grounded (What was expected)
Canva’s core challenge is reducing the search cost and production cost of visual content creation for non-designers (search cost theory and transportation cost theory). Its AI-powered design assistant (“Magic Design”) directly addresses this by lowering the skill barrier, which aligns with the Technology Acceptance Model: it increases perceived usefulness (users produce professional designs without training) and perceived ease of use (drag-and-drop, one-click generation). This is not technology for its own sake — it solves a clearly defined customer pain point (the inability of small business owners and solopreneurs to afford or access professional design services).
Strategically, this creates a network effect: as more users generate and share templates, the platform’s template library grows, increasing value for all users (demand-side economies of scale). This positive feedback loop reinforces lock-in and raises switching costs, making it difficult for users to migrate to competitors. Furthermore, Canva’s freemium-to-premium conversion model is evidence-based: free users experience the core value first (information as an experience good), which reduces perceived risk and drives conversion (RACE framework: Reach via free tier, Act via template engagement, Convert via premium subscription, Engage via collaborative workspaces and brand kits).
The business objective is measurable: increase conversion rate from free to paid users (a KPI tied to CAC and CLV), not simply “be more innovative.” The technology choice is justified because it creates competitive advantage through value that competitors cannot easily replicate — Canva’s integrated ecosystem of templates, brand assets, and collaborative tools produces digital homogenization resistance rather than contributing to it.
How Do These Two Versions Differ?
| Dimension | Version 1 (Technology-Only) | Version 2 (Theory-Grounded) |
|---|---|---|
| Problem identification | None. Technology is stated without a customer or market problem. | Clearly defined: non-designers cannot afford or access professional design tools (customer pain point). |
| Theoretical grounding | Absent. No reference to any framework or concept. | Applies search cost theory, TAM, network effects, lock-in, switching costs, RACE framework, and the experience good concept. |
| Business objective | Vague (“more advanced”). | Specific and measurable: improve free-to-paid conversion rate, tied to CAC and CLV. |
| Value creation logic | Missing. No explanation of how technology creates value for customers or the firm. | Explains the mechanism: lower skill barriers increase adoption, network effects grow the template ecosystem, lock-in sustains competitive advantage. |
| Competitive advantage | Not addressed. | Explicitly argues why the advantage is difficult to replicate (integrated ecosystem, demand-side scale). |
| Evidence-based reasoning | None. | Freemium model is linked to information economics (experience good); metrics (conversion rate, CAC, CLV) ground the recommendation in data. |
| “Technology for technology’s sake” test | Fails. The recommendation reads as “AI is trending, so use it.” | Passes. Every technology choice maps to a problem, a theory, and a measurable business outcome. |
The core difference: Version 1 describes what technology to use. Version 2 explains why that technology matters — connecting it to a customer problem, established theories from the course, and a measurable business objective. As covered in lectures: “Technology must be purposeful, not performative” and “Technology changes. Economic laws do not.”
Activity 2
GA4 Metrics Reference
The following metrics from Google Analytics 4 (GA4) were relevant to this activity. Understanding what each metric measures and where it fits in the reporting context is essential for accurate interpretation.
Website Traffic
- Sessions: The total number of visits to the site. A session starts when a user visits and ends after 30 minutes of inactivity. This metric gauges the overall volume of traffic.
- Users: The number of unique users who interacted with the site. GA4 distinguishes between Total Users, Active Users, and New Users using device ID, User ID, and Google Signals for cross-device identification.
- Views: The total number of pages or app screens viewed (GA4 combines page views and screen views into a single “Views” metric). This indicates the level of engagement with the website’s content.
- Engagement Rate: The percentage of sessions that were engaged (lasted longer than 10 seconds, had 2+ page views, or triggered a key event). This is a flagship GA4 metric that replaces the traditional bounce rate as the primary engagement signal.
- Average Engagement Time: The average time the site was in active focus per user. This replaced “Average Session Duration” from Universal Analytics and more accurately reflects active user attention.
- Bounce Rate: In GA4, this is the inverse of Engagement Rate (i.e., the percentage of sessions that were not engaged).
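The engaged-session logic above can be made concrete in a short sketch. The session records below are hypothetical, and the thresholds simply restate GA4's engaged-session definition (longer than 10 seconds, 2+ views, or a key event):

```python
# Classifying sessions as "engaged" per the GA4 definition above,
# then deriving engagement rate and bounce rate.
# Session records are hypothetical: (duration_seconds, views, key_event).
sessions = [
    (8, 1, False),    # short, single view, no key event -> not engaged
    (45, 1, False),   # longer than 10 seconds -> engaged
    (5, 3, False),    # 2+ views -> engaged
    (3, 1, True),     # triggered a key event -> engaged
]

def is_engaged(duration_seconds, views, key_event):
    return duration_seconds > 10 or views >= 2 or key_event

engaged_sessions = sum(is_engaged(*s) for s in sessions)
engagement_rate = engaged_sessions / len(sessions) * 100
bounce_rate = 100 - engagement_rate  # GA4 bounce rate is the inverse

print(f"Engagement rate: {engagement_rate:.1f}%")  # 75.0%
print(f"Bounce rate: {bounce_rate:.1f}%")          # 25.0%
```
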
E-Commerce Performance
- Ecommerce Purchases: The total number of completed purchase events. GA4 tracks this via the `purchase` event.
- Session Conversion Rate: The percentage of sessions that resulted in a purchase, calculated as (Ecommerce Purchases / Sessions) × 100. GA4 also offers a User Conversion Rate (the percentage of users who converted). Note: GA4 does not have a single pre-built “E-commerce Conversion Rate” metric as Universal Analytics did; you derive it from the Session or User conversion rates.
- Average Order Value (AOV): Calculated as Purchase Revenue / Number of Ecommerce Purchases. This indicates the average amount spent per transaction.
- Key Events (Conversions): GA4’s term for actions you mark as valuable (e.g., purchases, sign-ups, form submissions). These must be explicitly configured in GA4 settings.
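The two derived e-commerce metrics above can be computed directly from exported totals. A minimal sketch; the figures below are invented for illustration, not taken from the activity data:

```python
# Hypothetical GA4 export totals (illustrative only).
sessions = 12_500
ecommerce_purchases = 275      # completed purchase events
purchase_revenue = 19_250.00   # total revenue from those purchases

# Session Conversion Rate = (Ecommerce Purchases / Sessions) x 100
session_conversion_rate = ecommerce_purchases / sessions * 100

# Average Order Value = Purchase Revenue / Number of Ecommerce Purchases
aov = purchase_revenue / ecommerce_purchases

print(f"Session conversion rate: {session_conversion_rate:.2f}%")  # 2.20%
print(f"Average order value: ${aov:.2f}")                          # $70.00
```
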
Website Traffic Sources
- Default Channel Grouping: GA4 automatically classifies traffic into predefined channels based on source and medium. The main default channels include:
- Organic Search, Paid Search, Direct, Referral, Organic Social, Paid Social, Email, Display, Organic Video, Paid Video, Organic Shopping, Paid Shopping, Cross-network, and Affiliates.
- Sessions per Channel: The number of sessions generated from each traffic source.
- Revenue per Channel: The total revenue attributed to each traffic source.
Website Visitor Characteristics
- Demographics: Age and gender distribution of users. This helps in understanding the target audience.
- Geography: The geographical distribution of users. This shows where the traffic is coming from.
- Device Type: The type of devices used (desktop, mobile, tablet). This helps optimize the website for different devices.
- New vs. Returning Users: The proportion of first-time users compared to returning users. This indicates the site’s ability to both attract and retain its audience.
What could have been improved
- Avoid aggregate-level-only reporting
  - Many submissions used only total Q3 vs. total Q4 values.
  - This hides important within-quarter trends.
  - For managerial reporting, include time-trend views (e.g., monthly/daily movement across quarters).
- Strengthen quarter-to-quarter comparison logic
  - In some reports, Q3 and Q4 were not compared consistently.
  - Ensure:
    - correct date windows (Q3: Jul–Sep, Q4: Oct–Dec),
    - the same metric definitions across both quarters,
    - like-for-like interpretation.
- Improve chart and report clarity
  - Some visuals were missing essential labels.
  - Every chart should include:
    - a clear title,
    - labeled axes,
    - the date range,
    - the metric unit/format (%, $, count),
    - a brief takeaway statement.
What was done well
- Great initiative with Looker Studio
  - A few students used Looker Studio beyond expectations.
  - This is excellent professional practice for dashboard reporting.
  - Kudos and well done on going further than the minimum requirement.
Activity 3
Text analysis of content created by different GenAI platforms.
| Platform | Analytic | Clout | Authentic | Tone |
|---|---|---|---|---|
| ChatGPT | 88.14 | 53.85 | 33.91 | 92.78 |
| Claude | 90.67 | 63.07 | 28.00 | 91.93 |
| Gemini | 89.46 | 57.14 | 41.86 | 93.99 |
| Copilot | 84.00 | 56.47 | 29.23 | 87.59 |
| Val | 84.83 | 33.54 | 28.13 | 80.14 |
Treat AI drafts as starting points only; authenticity needs human editing (voice calibration, concrete specifics, and audience-real phrasing).
LIWC was used to perform the above analysis.
Linguistic Inquiry and Word Count (LIWC) is the gold standard in software for analyzing word use. It can be used to study a single individual, groups of people over time, or all of social media.
Similarity Across Platforms
| | ChatGPT | Gemini | Copilot | Claude | Val |
|---|---|---|---|---|---|
| ChatGPT | 0.1492 | 0.8703 | 0.9244 | 0.8849 | 0.5009 |
| Gemini | 0.8703 | 0.1462 | 0.8699 | 0.8451 | 0.4936 |
| Copilot | 0.9244 | 0.8699 | 0.1439 | 0.8666 | 0.5126 |
| Claude | 0.8849 | 0.8451 | 0.8666 | 0.1399 | 0.4637 |
| Val | 0.5009 | 0.4936 | 0.5126 | 0.4637 | 0.1273 |
Similarity scores were computed using TF-IDF Cosine Similarity. Scores range from 0 (completely different) to 1 (identical), where values above 0.85 indicate highly similar vocabulary and framing, 0.50–0.85 indicates moderate similarity, and below 0.50 indicates distinct differences. Values are symmetric and diagonal elements denote self-comparisons.
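For anyone wanting to reproduce a matrix like this, the computation can be sketched in pure Python. The three texts below are invented placeholders (not the actual platform outputs), and the IDF uses a common smoothed variant; in practice a library such as scikit-learn's TfidfVectorizer does the same job at scale.

```python
import math
from collections import Counter

# Illustrative documents (placeholders, not the platform outputs above).
docs = {
    "a": "crisp sparkling wine from tasmania with apple notes",
    "b": "sparkling tasmania wine crisp and creamy apple finish",
    "c": "quarterly traffic report with sessions and conversion metrics",
}

tokenised = {name: text.split() for name, text in docs.items()}
vocab = sorted({w for words in tokenised.values() for w in words})
n_docs = len(docs)

def idf(term):
    # Smoothed inverse document frequency.
    df = sum(term in words for words in tokenised.values())
    return math.log(n_docs / df) + 1

def tfidf_vector(words):
    counts = Counter(words)
    return [counts[t] / len(words) * idf(t) for t in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

vecs = {name: tfidf_vector(words) for name, words in tokenised.items()}
print(round(cosine(vecs["a"], vecs["b"]), 4))  # higher: shared wine vocabulary
print(round(cosine(vecs["a"], vecs["c"]), 4))  # lower: little vocabulary overlap
```

Note that the cosine similarity of any document with itself is exactly 1.0 by construction.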
Key Insights
- All AI models are highly similar to each other (0.84–0.92), suggesting modern LLMs converge on similar vocabulary, tone, and framing when given the same task.
- ChatGPT ↔︎ Copilot (0.9244) is the strongest pairing, likely because both are built on OpenAI’s underlying models.
- Val scores lowest across all comparisons (~0.46–0.51), confirming that older/simpler AI models generate noticeably different language compared to modern LLMs.
- Claude is the most distinct among the modern LLMs: it scores the lowest against Val (0.4637) and slightly lower against its peers, suggesting a more differentiated writing style.
- Copilot is the most “central” modern LLM: it has the highest similarity with ChatGPT (0.9244) and with Val (0.5126), meaning its outputs sit closest to the group.
- The gap between Val and the modern LLMs (~0.50) versus the modern LLMs with each other (~0.88) is substantial, highlighting a clear generational divide in language generation capability and style.
Modern LLMs produce strikingly similar outputs (0.85–0.92 similarity), revealing a landscape of algorithmic convergence where AI-generated content risks blending into undifferentiated marketing noise. Even older AI models (Val, ~0.46–0.51) diverge more in style than modern LLMs do from each other. Brands relying solely on AI content forfeit the human differentiator — the creative intuition, emotional intelligence, and cultural nuance that forge authentic connections and truly distinctive brand voices in a crowded digital marketplace.
Overall, a great effort on your use of AI for the content marketing activity. To improve further, focus on treating prompting as a strategy task, not just a writing task. The best prompts are clear, purposeful, and tied to the outcome you want.
What you did well
- You gave the model a clear task context.
- You compared outputs across multiple platforms.
- A few of you showed awareness of tone, audience, and brand fit.
How to strengthen your prompt quality
- Start with a precise objective
  - State exactly what success looks like (e.g., awareness, engagement, premium positioning).
- Add relevant context only
  - Include audience, product facts, and channel, but avoid unnecessary detail that distracts the model.
- Specify output constraints
  - Define length, tone, format, and any prohibited elements so outputs are directly usable.
- Guide reasoning, not just output
  - Ask the model to consider all key details before writing, and to check alignment with your brief.
- Use one prompt = one core goal
  - Avoid mixing too many instructions in one request.
- Refine iteratively
  - Draft → test → adjust. Small changes in wording can significantly improve quality.
Suggested prompt structure for future tasks
- Role (who the model is acting as)
- Task (what to produce)
- Context (brand, audience, product facts)
- Constraints (length, tone, format, prohibited elements)
- Quality check (accuracy, relevance, consistency before final output)
Worked Example: Applying the Prompt Structure to the Activity Scenario
The activity scenario asks you to act as a social media marketing manager of Premier Group launching Distant South Sparkling from Tasmania, a wine recognised among the 13 best-value wines from the 2023 Decoded Wine Awards by Dan Murphy’s.
A weak prompt (unstructured):
Write a social media post for Distant South Sparkling wine. Make it engaging and fun. Use hashtags.
This prompt lacks a defined role, audience, business objective, platform constraints, and any quality guardrail. The output will be generic and difficult to evaluate against a marketing goal.
A strong prompt (following the 5-part structure):
1. Role: You are an experienced social media copywriter for Premier Group, a wine and spirits retailer. You write in a tone that is approachable and knowledgeable — confident but never pretentious. You understand that your audience is buying wine for enjoyment, not to be lectured.
2. Task: Write one Instagram caption to announce the launch of Distant South Sparkling from Tasmania. The caption should drive awareness of the product and encourage in-store or online purchase. Include a clear call to action.
3. Context:
- Product: Distant South Sparkling NV, a Chardonnay and Pinot Noir blend from Tasmania by Josef Chromy. Tasting notes: fine persistent bead, aromas of apple, almond, and honeycomb with crisp apple, lemon curd, and lychee on the palate. Creamy texture.
- Price point: ~$20.90 (positioned as premium-accessible, not budget).
- Credential: Named one of the 13 best-value wines at the 2023 Dan Murphy’s Decoded Wine Awards.
- Audience: 25–45 year olds, metro-based, wine-curious but not wine snobs. They value quality at a fair price. They are likely browsing Instagram in the evening or on weekends.
- Retailer: Premier Group — locally owned stores with a reputation for curated selection.
4. Constraints:
- Length: 60–100 words (Instagram optimal).
- Tone: Warm, celebratory, conversational. Not formal or overly poetic.
- Format: Caption only (no image description needed). Include one clear CTA (e.g., “Shop now,” “Find it in store”).
- Hashtags: Maximum 3, placed at the end and relevant to wine discovery (not generic like #love or #instagood). Note: if adapting this post for Twitter/X, do not use hashtags or include links in the main post — the Twitter/X algorithm deprioritises hashtags and penalises posts with links.
- Prohibited: Do not use the phrase “hidden gem.” Do not reference alcohol as a coping mechanism. Comply with responsible drinking messaging.
5. Quality check: Before producing the final output, verify that the caption: (a) mentions the award credential accurately (2023 Decoded Wine Awards, Dan Murphy’s), (b) includes at least one specific tasting note to differentiate from generic wine copy, (c) has a clear CTA, (d) stays within word and hashtag limits, and (e) matches the tone brief (approachable, not pretentious).
Why this matters: The weak prompt delegates all decision-making to the model. The strong prompt retains strategic control — the marketer decides the objective (awareness + purchase), the audience (wine-curious 25–45), the positioning (premium-accessible, award-winning), and the constraints (platform-appropriate length, responsible messaging). The model’s job is execution within a well-defined brief, not guessing what the marketer wants.
This is the difference between using AI as a “content vending machine” and using it as a “strategic execution partner.” The prompt structure ensures that every output can be evaluated against the brief, iterated on with purpose, and aligned with a measurable marketing outcome.
Sample Outputs from ChatGPT
The following outputs were generated by ChatGPT using the two prompts above. Compare them side by side.
Output from the General Prompt:
Prompt used: “Write a social media post for Distant South Sparkling wine. Make it engaging and fun. Use hashtags.”
ChatGPT output:
Raise your glass to something truly special! Distant South Sparkling is here to make every moment sparkle. Whether you’re celebrating a big win or just treating yourself on a Friday night, this Tasmanian beauty has got you covered. Crisp, refreshing, and absolutely divine — it’s the perfect sip for any occasion. Pop the cork and let the good times flow! #SparklingWine #TasmanianWine #CheersToTheWeekend #WineLovers #BubblesAndBeyond #DrinkBetter #WineTime
Output from the Specific Prompt:
Prompt used: The 5-part structured prompt (Role, Task, Context, Constraints, Quality check) shown above.
ChatGPT output:
Tasmanian bubbles that punch well above their price tag. Distant South Sparkling is a Chardonnay–Pinot Noir blend with crisp apple, lemon curd, and a creamy finish that keeps you coming back. Named one of the best-value wines at the 2023 Dan Murphy’s Decoded Wine Awards — and at around $21, it’s an easy yes. Now available at Premier Group stores and online. Grab a bottle before the weekend does.
#TasmanianWine #SparklingWine #DecodedWineAwards
What the Specific Prompt Improved
| Dimension | General Prompt Output | Specific Prompt Output |
|---|---|---|
| Tone | Generic excitement (“truly special,” “absolutely divine”) that could describe any wine or product. | Conversational and confident without being pretentious (“punch well above their price tag,” “easy yes”). Matches the brief. |
| Product specificity | No tasting notes, no grape varieties, no origin detail beyond “Tasmanian.” Interchangeable with any sparkling wine. | Names the blend (Chardonnay–Pinot Noir), includes specific tasting notes (crisp apple, lemon curd, creamy finish), and states the price point (~$21). |
| Award credential | Not mentioned at all. The model had no context to include it. | Accurately cited: “2023 Dan Murphy’s Decoded Wine Awards” as best-value recognition. |
| Call to action | Absent. “Pop the cork” is a mood statement, not a purchase driver. | Clear and actionable: “Now available at Premier Group stores and online. Grab a bottle before the weekend does.” |
| Hashtags | 7 generic hashtags (#WineTime, #CheersToTheWeekend, #BubblesAndBeyond) that add no discovery value and signal low-effort content. | 3 relevant hashtags placed at the end, aligned with wine discovery. |
| Word count | ~70 words of filler. High word-to-value ratio. | ~80 words, each serving a purpose (positioning, proof point, tasting note, CTA). |
| Brand alignment | No mention of Premier Group. Could be posted by anyone. | Explicitly ties the post to Premier Group as the retailer, reinforcing brand association. |
| Responsible messaging | “Pop the cork and let the good times flow” borders on encouraging excess. | Neutral, product-focused language. No problematic framing. |
The general prompt produced content that is enthusiastic but empty — it could describe any sparkling wine from any brand on any platform. The specific prompt produced content that is strategically useful: it positions the product (premium-accessible), provides proof (award), differentiates (tasting notes), drives action (CTA with retailer name), and respects platform constraints (hashtag count). The difference is not just quality of writing — it is whether the output is usable as marketing collateral without significant human rework.
Key takeaway: Strong prompting is about intentional decision-making. A prompt should not be long for the sake of being long; it should be structured to help the model produce output that is accurate, audience-fit, and aligned with your marketing objective.
Sources:
- A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications
- Prompt Engineering a Prompt Engineer
To conclude, the field is moving from prompt engineering to context engineering: success now depends less on clever wording and more on providing the right context (task goal, constraints, examples, prior state, and relevant information) so the model can produce reliable outputs. In short, quality comes from designing the full decision context, not just the prompt sentence.
Activity 4
Activity 4 was done well overall—you understood the workflow and applied the agent + digital twin concept appropriately.
Strong point: You correctly treated Phase 1 (synthetic customer generation) as a critical foundation step, not just data creation.
- This matters because the quality of the synthetic personas determines how closely the digital twins can represent real customer behavior.
- A few submissions did not show this step.
- If this step is not done right (i.e., the simulation is not representative of the real world), you have to repeat it, either by regenerating the data or by improving the instructions/context.
- Why this is important: If synthetic profiles are unrealistic or poorly distributed, the simulation in later phases may look precise but still be misleading. So your attention to this step was a strength.
Also done well: In Phase 2, you showed the right understanding that agent outputs depend heavily on instruction quality.
- Key feedback going forward: Keep in mind that the objective (e.g., balancing revenue, churn, and visit frequency, or trading off sales impact against visits) is a human business goal, not the agent’s goal by default. The agent will only optimize what we explicitly define.
- In this activity, the default expectation from basic demand theory is: when price increases, demand tends to decrease. However, a few submissions showed an overall increase in sales value after the cappuccino price rose from $4.50 to $5.50. When results appear to conflict with theory, the right next step is to investigate segment-level behaviour, not stop at the aggregate result. In this simulation, one plausible explanation is that some customer digital twin segments (especially Loyal Executives) are less price-sensitive, so reduced visits were small relative to the higher unit price. Another explanation is perceived value: if customers interpret the higher price as a signal of quality, demand may hold better than expected. So the key is to reconcile theory with evidence by drilling into the four segments and explaining why each segment reacted the way it did.
Segmentation-level analysis: Since your digital twins include four distinct segments (Loyal Executive, Price-Conscious Student, Quality-Focused Gig Worker, Social Connector), the analysis could have incorporated segment-level insights, and a few submissions did this. This level of analysis aligns with a customer-centric framework: different customer groups respond differently, so decisions (pricing, offers, communication, retention actions) should be tailored by segment, with priority given to the segments most valuable to the firm.
Human-in-the-loop check is essential: Do not accept agentic AI output at face value—validate it against business theory, segment logic, and managerial context before drawing conclusions.
Illustrative Example: Two Possible Outcomes of a Price Increase
The cappuccino price increases from $4.50 to $5.50 (a 22.2% increase). The digital twin simulation produces visit-level data for 20 synthetic customers across four segments. Below are two cases that could emerge depending on how the agent models segment-level price sensitivity.
Case 1: Sales Decrease (Consistent with Demand Theory)
In this scenario, the visit reduction across price-sensitive segments is large enough that even the higher unit price cannot compensate.
| Segment | Visits/Week (Old) | Visits/Week (New) | Visit Change | Sales/Week (Old) | Sales/Week (New) | Sales Change |
|---|---|---|---|---|---|---|
| Loyal Executive (n=6) | 30.00 | 28.00 | -6.7% | $135.00 | $154.00 | +14.1% |
| Price-Conscious Student (n=5) | 9.00 | 2.50 | -72.2% | $40.50 | $13.75 | -66.0% |
| Quality-Focused Gig Worker (n=4) | 4.00 | 1.50 | -62.5% | $18.00 | $8.25 | -54.2% |
| Social Connector (n=5) | 3.00 | 1.50 | -50.0% | $13.50 | $8.25 | -38.9% |
| Total | 46.00 | 33.50 | -27.2% | $207.00 | $184.25 | -11.0% |
This outcome is consistent with basic demand theory: a price increase leads to a decrease in quantity demanded, and total revenue falls because the lost volume outweighs the higher per-unit price. At the aggregate level, both visits (-27.2%) and sales (-11.0%) decline. The result requires no further reconciliation with theory — it behaves as expected.
Case 2: Sales Increase (Appears to Contradict Demand Theory)
In this scenario — which reflects the actual simulation output from the data — visits decline but sales increase because Loyal Executives maintain their visits entirely while other segments reduce modestly.
| Segment | Visits/Week (Old) | Visits/Week (New) | Visit Change | Sales/Week (Old) | Sales/Week (New) | Sales Change |
|---|---|---|---|---|---|---|
| Loyal Executive (n=6) | 30.00 | 30.00 | 0.0% | $135.00 | $165.00 | +22.2% |
| Price-Conscious Student (n=5) | 9.00 | 3.50 | -61.1% | $40.50 | $19.25 | -52.5% |
| Quality-Focused Gig Worker (n=4) | 4.00 | 2.50 | -37.5% | $18.00 | $13.75 | -23.6% |
| Social Connector (n=5) | 3.00 | 2.50 | -16.7% | $13.50 | $13.75 | +1.9% |
| Total | 46.00 | 38.50 | -16.3% | $207.00 | $211.75 | +2.3% |
At the aggregate level, this looks like a paradox: visits fall by 16.3% yet sales rise by 2.3%. Stopping here and reporting “the price increase was beneficial” would be misleading. The segment-level decomposition reveals the mechanism.
Segment-Level Analysis
| Segment | Price Sensitivity | Visit Behaviour | Sales Impact | Explanation |
|---|---|---|---|---|
| Loyal Executive | Very low | No change (30 → 30) | +$30.00/wk (+22.2%) | High income, strong routine, values consistency over cost. This segment alone drives the entire aggregate sales increase. |
| Price-Conscious Student | Very high | Steep drop (9 → 3.5) | -$21.25/wk (-52.5%) | Budget-constrained; a 22% price hike triggers substitution (home brewing, campus café). Largest absolute visit loss. |
| Quality-Focused Gig Worker | Moderate | Reduction (4 → 2.5) | -$4.25/wk (-23.6%) | Values quality but has variable income; trims visits rather than leaving entirely. Evaluating alternatives. |
| Social Connector | Low (socially mediated) | Slight drop (3 → 2.5) | +$0.25/wk (+1.9%) | Decision driven by group behaviour, not price. Visit reduction is marginal; higher unit price nearly offsets it. |
The aggregate sales increase of $4.75/week is entirely driven by Loyal Executives (+$30.00), which masks a combined loss of -$25.25 from the other three segments. This is a composition effect: the highest-volume, lowest-sensitivity segment dominates the aggregate. The managerial implication is that the price increase is profitable only because the firm retains its most valuable segment. However, it accelerates churn in the Price-Conscious Student segment (from 9 to 3.5 visits/week), which represents a long-term customer acquisition risk. A segment-aware strategy might pair the price increase with a student loyalty discount or bundle offer to mitigate this attrition while preserving the margin gains from Loyal Executives.
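The composition effect can be checked with a few lines of arithmetic using the Case 2 figures from the tables above:

```python
# Decomposing the Case 2 aggregate sales change by segment.
# Visit figures are taken from the Case 2 table; prices in dollars.
OLD_PRICE, NEW_PRICE = 4.50, 5.50

visits_per_week = {  # segment: (old visits/week, new visits/week)
    "Loyal Executive":            (30.0, 30.0),
    "Price-Conscious Student":    (9.0, 3.5),
    "Quality-Focused Gig Worker": (4.0, 2.5),
    "Social Connector":           (3.0, 2.5),
}

total_change = 0.0
for segment, (old_v, new_v) in visits_per_week.items():
    change = new_v * NEW_PRICE - old_v * OLD_PRICE
    total_change += change
    print(f"{segment:28s} {change:+7.2f} $/week")

print(f"{'Aggregate':28s} {total_change:+7.2f} $/week")  # +4.75 $/week
```

Running this reproduces the table: a +$30.00 gain from Loyal Executives against a net -$25.25 from the other three segments, leaving the +$4.75 aggregate.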