{"total":144,"offset":0,"limit":144,"data":[{"path":"/developer/tutorial","title":"Getting Started – Developer Tutorial","image":"/developer/media_1d00989ba18e942fbddc9bb108add01e153029f22.png?width=1200&format=pjpg&optimize=medium","description":"This tutorial will get you up-and-running with a new Adobe Experience Manager (AEM) project. In ten to twenty minutes, you will have created your own ...","content":"style\ncontent\n\nGetting Started – Developer Tutorial\n\nThis tutorial will get you up-and-running with a new Adobe Experience Manager (AEM) project. In ten to twenty minutes, you will have created your own site and be able to create, preview, and publish your own content and styling, and add new blocks.\n\nPrerequisites:\n\nYou have a GitHub account and understand Git basics.\nYou understand the basics of HTML, CSS, and JavaScript.\nYou have Node/npm installed for local development.\n\nThis tutorial uses macOS, Chrome, and Visual Studio Code as the development environment, and the screenshots and instructions reflect that setup. You can use a different operating system, browser, and code editor, but the UI you see and the steps you must take may vary accordingly.\n\nTIP: If you would like to get your AEM project started the fastest way Adobe offers, with content you already know, use the AEM Modernization Agent. 
Just create your boilerplate repository and then collaborate with the agent to import your site.\n\nGet started with the boilerplate repository template\nhttps://main--helix-website--adobe.aem.page/developer/videos/tutorial-step1.mp4\n\nThe fastest and easiest way to get started following AEM best practices is to create your repository using the Boilerplate GitHub repository as a template.\n\nhttps://github.com/adobe/aem-boilerplate\n\nClick the Use this template button, select Create a new repository, and select the user or org that owns the repository.\n\nWe recommend that the repository is set to public.\n\nThe only remaining step in GitHub is to install the AEM Code Sync GitHub App on your repository by visiting this link: https://github.com/apps/aem-code-sync/installations/new\n\n\n\n\nIn the Repository access settings of the AEM Code Sync App, make sure you select Only select Repositories (not All Repositories). Then select your newly created repository, and click Save.\n\nNote: If you are using GitHub Enterprise with IP filtering, you can add the following IP to the allow list: 3.227.118.73\n\nCongratulations! You have a new website running on https://<branch>--<repo>--<owner>.aem.page/. In the example above, that’s https://main--mysite--aemtutorial.aem.page/\n\n\n\n\nEdit, Preview and Publish your content\n\nNavigate to Author on https://da.live/ and find the example content.\n\nEdit, preview, and publish content as needed. For more information on authoring, see https://da.live/docs.\n\n\nInstall Sidekick\n\n\nTo interact with AEM as an author across environments, we strongly recommend installing the Sidekick Chrome extension. 
Find the Chrome extension in the Chrome Web Store.\n\nAfter adding the extension to Chrome, don’t forget to pin it; this will make it easier to find.\n\nStart developing styling and functionality\nhttps://main--helix-website--adobe.aem.page/developer/videos/tutorial-step4.mp4\n\nTo get started with development, it is easiest to install the AEM Command Line Interface (CLI) and clone your repo locally using the following commands.\n\nnpm install -g @adobe/aem-cli\ngit clone https://github.com/<owner>/<repo>\n\n\n\n\nFrom there, change into your project folder and start your local development environment using the following commands.\n\ncd <repo>\naem up\n\n\n\n\nThis opens http://localhost:3000/ and you are ready to make changes.\nA good place to start is in the blocks folder, which is where most of the styling and code lives for a project. Simply make a change in a .css or .js file and you should see the changes in your browser immediately.\n\nOnce you are ready to push your changes, simply use Git to add, commit, and push, and your code will be deployed to your preview (https://<branch>--<repo>--<owner>.aem.page/) and production (https://<branch>--<repo>--<owner>.aem.live/) sites.\n\nThat’s it, you made it! Congrats, your first site is up and running. If you need help with the tutorial, please join our Discord channel or get in touch with us.\n\nTo get you to a live website as fast as possible, this tutorial uses Document Authoring as the content source. 
Universal Editor can also be configured for your project, to provide a WYSIWYG + Form-based authoring option.\n\nNOTE: Edge Delivery Services supports multiple content sources including Google Drive, Microsoft Sharepoint and AEM.\n\nNext Steps\n\nNow that your site is up and running, choose how you want to continue:\n\nBuild your first block - Create, style, and deploy a custom block from scratch.\nBuild with AI - Configure your project for AI-assisted block creation and development.\n\nPrevious\n\nBuild\n\nUp Next\n\nAnatomy of an AEM Project","lastModified":"1773252759","labs":""},{"path":"/docs/go-live-checklist","title":"Go-Live Checklist","image":"/docs/media_1da4bd7d3a1161f686fa72258c51bd49249fa142a.png?width=1200&format=pjpg&optimize=medium","description":"The go-live checklist is a summary of best practices to consider when launching a website. These steps are generally good practices but have some aspects ...","content":"style\ncontent\n\nGo-Live Checklist\n\nThe go-live checklist is a summary of best practices to consider when launching a website. These steps are generally good practices but have some aspects specific to Adobe Experience Manager.\n\nSteps Before Go-Live\nContent and Design QA\n\nMake sure that your content and design conforms to the specifications and that you are happy with the website you see on your projects .aem.live domain. 
This may include checks for specific accessibility and SEO requirements of your project.\n\nPerformance Validation\n\nEvery AEM project should produce a lighthouse score of 100 for mobile and desktop from Google Pagespeed insights on its respective .aem.live site.\n\nSee the document Keeping it 100, Web Performance for more information.\n\nAnalytics Validation\n\nMake sure that all your analytics setup and the rest of your martech stack is firing as expected and visitor data is visible in your reporting dashboards.\nIn any relaunch of a website the analytics instrumentation will change based on loading sequence and performance.\n\nIt is important to expect that the baseline of any metric captured by analytics will change. Contact the corresponding analysts to make sure that the adjustment of the baseline is understood and expected.\n\nMetrics that may change their baselines as reported by analytics may include pageviews, conversion rates, bounce rates, time on page, etc. Depending on the change in loading patterns the baseline of the metrics may go up and down.\nBottom of the funnel metrics like checkout, transactions or form submission that are captured by operational systems are not affected and are expected to stay flat past a lift-and-shift launch.\n\nRUM instrumentation\n\nTo be able to see performance impact quickly and reliably and to compare before / after launch metrics we recommend instrumenting your website before launch with Real Use Monitoring (RUM), ideally as early as possible. Adding RUM to your existing site is trivial and can give you important operational insights even before launch.\n\nLegacy Redirects\n\nIn most migrations there are legacy URLs that are retired. Make sure those are reflected in your redirects spreadsheet (redirects.xlsx in sharepoint or redirects in google), found in your project content folder. 
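The required sheet format is minimal: two columns titled Source and Destination. The rows below are hypothetical examples, not entries from a real project:

```
Source                 Destination
/old-press-page        /news
/products/2019.html    https://example.com/products
```

Source paths are relative to your own domain; a Destination can be a relative path on the same site or a fully qualified URL elsewhere.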
Check Google Search Console for the most impactful backlinks (in terms of SEO) to create redirects for.\n\nSee the document Redirects for more information.\n\nSitemap & Robots\n\nFor most websites with a significant number of pages, a sitemap is desirable. AEM automatically generates sitemaps from the query index. For multilingual sites, adding hreflang to the sitemap ensures that the website correctly targets the appropriate geographic and language audience, which is essential for SEO. It prevents issues like duplicate content across different language versions (aka SEO cannibalisation) and improves search engines' ability to serve the right version of the content to the right users.\n\nIf you have a sitemap generated for your site, make sure it is discoverable from your robots.txt. Note that robots.txt is (technically) case sensitive, and a good example is:\n\nUser-agent: *\nAllow: /\n\nSitemap: https://<your-domain>/sitemap.xml\n\n\nNote: aem.page and aem.live are kept hidden from crawlers intentionally, to avoid duplicate content. There is no need to set the robots.txt to Disallow crawlers during development.\n\nIt is important to understand that a Disallow in robots.txt can help with crawl budget issues under certain conditions, but it does not prevent a page from being added to the Google index, and it makes the page's removal via noindex impossible. URLs of pages that have a Disallow can still be discovered in a SERP.\n\nAn effective way to remove a URL from the index is to Allow crawlers and put a noindex robots tag on the page.\n\nAdding a custom robots.txt is done via config service.\n\nSee the documents Indexing and Sitemaps for more information.\n\nCanonical URLs\n\nMake sure canonical URLs return a 2xx HTTP response status code (not 3xx or 4xx) and that they are correctly implemented, which is crucial for preventing duplicate content issues across the site. 
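Canonical checks can be scripted. As an illustrative sketch only (the HTML snippet and class name below are made up, not part of any AEM tooling), the canonical link of a page can be extracted with the Python standard library:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical" and self.canonical is None:
            self.canonical = attrs.get("href")

finder = CanonicalFinder()
finder.feed('<html><head><link rel="canonical" href="https://example.com/page"></head></html>')
print(finder.canonical)  # https://example.com/page
```

Combined with an HTTP client, a script along these lines can walk a sitemap and flag pages whose canonical URL does not answer with a 2xx status.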
Proper canonicalization helps search engines understand which versions of similar pages to index and display in search results, directly impacting SEO performance.\n\nSee the following external documentation for more information: Consolidate duplicate urls\n\nFavicon\n\nAdding a favicon to your site gives it a professional look in your visitor’s browsers.\n\nSee the document Favicon for more information.\n\nAuthentication for Authors\n\nBy default, authors don’t need to be logged in to use AEM Sidekick. If you decide you want to control who can preview and publish documents this can be configured.\n\nSee the document Configuring Authentication for Authors for more information.\n\nSharePoint Access\n\nIf your content is in SharePoint, follow this guide to configure dedicated access which you control.\n\nCDN Configuration\n\nOne of the last steps in a go-live is usually to update your CDN to point to your aem.live endpoint.\n\nYou can either use your existing, self-managed CDN or an Adobe-managed CDN included in your license. See here for details on supported CDN options.\n\nIdeally the CDN configuration is tested in a staging environment to make sure everything works as expected, which includes redirects from www to APEX and vice-versa.\n\nPush Invalidation Setup\n\nMake sure push invalidation is properly set up according to the document Configuring Push Invalidation for BYO Production CDN. Test the setup by publishing a small change and verifying that the change is visible on the production domain.\n\nNotify Engineering On-Call\n\nThe Edge Delivery Services team is closely monitoring all systems as part of our standard 24/7 operations. Multi-tenant incidents typically appear on status.adobe.com. 
If we detect irregularities affecting a single tenant, we will reach out to the affected customer to help them address the problem.\n\nBy notifying us about your go-live, you can help us pay extra attention to your project during the go-live, and we can contact you via Teams or Slack more quickly in case of any anomalies we can assist with.\n\nTo notify us, please send an email to aemgolives@adobe.com and include the following information:\n\naem.live URL(s) that will be going live on Edge Delivery Services\nProduction domain\nPlanned go-live date and time\nPrimary contact person for the go-live\nTeams or Slack channel for real-time collaboration with Engineering\nIf you do not yet have a collaboration channel set up yet, please refer to Teams Collaboration for guidance.\nPost Go-Live Validation\nPerformance Validation\n\nValidate that the performance is still at a lighthouse score of 100 via pagespeed insights on the production environment. Introducing a CDN layer can have adverse performance effects that are usually visible on the protocol layer. Typical culprits are running HTTP/1.1 or ineffective origin caching as well as bot detection or other libraries injected by the CDN configuration.\n\nGoogle Search Console\n\nIf you have an active Google Search Console with your sitemap uploaded, it may be valuable to get a coverage report and make sure that indexing works as expected. The Google Search Console should be monitored for the weeks after a go-live to track the indexing status of new and updated pages, ensuring they are properly recognized by Google. It's crucial to check for total clicks, total impressions, backlinks changes and crawl errors, as these can significantly impact the site's SEO performance and authority.\n\nMartech and Analytics validation\n\nCertain aspects of your martech stack may be tied to specific origins / hostnames, and operate differently on staging (or .page and .live) hostnames if not configured correctly. 
It is advisable to make sure that all the important tags in the martech stack fire correctly and the information is continuously collected after a go-live.\n\n404 Report\n\nAfter a website has been migrated there is usually a set of 404 Not Founds, which should be monitored after the go-live and redirected to popular page URLs. This information can be pulled from your site analytics and the respective Slack bot report. Monitoring this for the weeks after a go-live is recommended.\n\nPrevious\n\nLaunch\n\nUp Next\n\nBYO CDN Setup Overview","lastModified":"1758639182","labs":""},{"path":"/docs/","title":"Documentation","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Documentation hub for authors, developers and operations.","content":"Documentation\nHelpful guides for your journey with Adobe Experience Manager\nBuild\nPublish\nLaunch\nstyle\nheading\nstyle\ncontent\nBuild\nDeveloper Tutorial\n\nThis tutorial will get you up-and-running with a new project.\n\nAnatomy of a Project\n\nDiscover how a typical project should look from a code standpoint.\n\nBlock Collection\n\nA collection of blocks considered a part of the product and are recommended as blueprints for blocks in your project.\n\nSpreadsheets\n\nMicrosoft Excel workbooks and Google Sheets are translated into JSON files that can easily be consumed by your website or web application.\n\nIndexing\n\nHow to keep an index of all the published pages in a particular section of your website.\n\nKeeping it 100\n\nThe quality of the experience of consumers of your website is crucial to achieving the business goals of your website and the satisfaction of your visitors.\n\nMarkup - Sections\n\nThe markup and DOM are constructed in a way that allows flexible manipulation and styling.\n\nFavicon\n\nAdding a favicon to your site gives it a professional look in your visitor’s browsers.\n\nCustom Headers\n\nLearn how to apply custom HTTP response headers to resources, for example to allow 
CORS.\n\nDevelopment Collaboration and Good Practices\n\nBecome a better developer and team mate with these tips from successful AEM project leads.\n\nPublish\nAuthoring\n\nIf you use Microsoft Word or Google Docs, then you already know how to create content.\n\nAuthoring with AEM\n\nAuthoring content in AEM as a Cloud Service using the Universal Editor, you benefit from the power of AEM’s robust tool set for content management and the unparalleled performance of Edge Delivery Services.\n\nBulk Metadata\n\nIn some cases, it is useful to apply metadata en masse to a website.\n\nSlack Bot\n\nWe are available to you on Slack, and our Slack Bot can perform useful tasks.\n\nPlaceholders\n\nPlaceholders can be managed as a spreadsheet that is either in the root folder of the project or in the locales root folder in the case of a multilingual site.\n\nSitemap\n\nAutomatically generate sitemap files to be referenced from your robots.txt. This helps with SEO and the discovery of new content.\n\nLaunch\nGo Live Checklist\n\nThe go-live checklist is a summary of best practices to consider when launching a website.\n\nPush Invalidation\n\nAutomatically purge content on your production CDN, whenever an author publishes content changes.\n\nCloudflare Worker Setup\n\nLearn how to configure Cloudflare to deliver content.\n\nAkamai Setup\n\nDiscover how to use the Akamai Property Manager to configure a property to deliver content\n\nFastly Setup\n\nThis guide illustrates how to configure Fastly to deliver content.\n\nCloudFront Setup\n\nSet up Amazon Web Services Cloudfront to deliver your AEM site with push invalidation\n\nAdobe-managed CDN\n\nUse the CDN included in your Adobe Experience Manager Sites as a Cloud Service license.\n\nRedirects\n\nYou can intuitively manage redirects as a spreadsheet called redirects (or redirects.xlsx) in the root of your project folder.\n\nResources\nSidekick\n\nInstall the AEM Sidekick to author and publish with Adobe Experience 
Manager.\n\nFAQ\n\nAnswers to your other questions in one place.\n\nAdmin API\n\nReference documentation for the Admin API.\n\nAEM Status\n\nCheck if Adobe Experience Manager sites are available right now and confirm service level availability.\n\nArchitecture\nArchitecture Overview\n\nUnderstand the basics of AEM architecture\n\nStaging & Environments\n\nBest practices for setting up environments\n\nSecurity Overview\n\nHow Adobe is keeping your sites secure and available\n\nGlobal Availability\n\nGlobal Content Delivery Networks enable global content availability\n\nIntegration Anti-Patterns\n\nThese integration patterns frequently cause issues, avoid them\n\nAEM as a Content Source\n\nHow content is published from AEM Sites authoring to Edge Delivery Services","lastModified":"1744376865","labs":""},{"path":"/docs/redirects","title":"Redirects","image":"/docs/media_11f1e812b5708f947436b2cd918bcec9cf5e2d6b7.jpg?width=1200&format=pjpg&optimize=medium","description":"Every website has the need for redirects. For example if you relocate or delete content, you want your users to still be able to find ...","content":"style\ncontent\n\nRedirects\n\nEvery website has the need for redirects. For example if you relocate or delete content, you want your users to still be able to find it or the next best thing. 
See the document Authoring and Publishing Content for more information on deleting content.\n\nYou can intuitively manage redirects as a spreadsheet called redirects (or redirects.xlsx) in the root of your project folder.\n\nThe spreadsheet must contain at least two columns titled Source and Destination.\n\nThe Source is relative to the domain of your website, so it only contains the relative path.\nThe Destination can be either a fully qualified URL if you are redirecting to a different website, or it can be a relative path if you are redirecting within your own website.\n\nAfter making changes to your redirects spreadsheet, you can preview your changes via the Sidekick and have your stakeholders check that the redirects are working on your .page preview website before publishing the redirect changes to your production website. See the Sidekick documentation for more information about switching between environments.\n\nRedirects take precedence over existing content, which means that if you have an existing page with a given URL, defining a redirect for that same URL will serve the redirect for that page and “hide” the existing page. Conversely, if a redirect that has been set up on an existing page is removed, the existing page will be served again, unless the page was unpublished.\n\nRemember that if your redirect workbook has multiple pages (worksheets), then the redirects will only work on the sheet that is called helix-default. This allows you to manage more complex redirects through spreadsheet formulas. The spreadsheets and JSON documentation page has all the details.\n\nWildcard Redirects\n\nWildcard redirects come with a set of issues: (a) they add unmanaged complexity and accumulate tech debt over time, (b) they can introduce redirect loops, and (c) they tend to turn 301s into 404s. 
For those reasons, we generally recommend avoiding pattern-based wildcard redirects and instead creating redirects based on actual \"usage\" data.\n\nHowever, there are cases where individually managed redirects inflate the list of redirects, and for practical reasons, or for perceived initial simplicity, pattern-based redirects may be compelling. For example, you might want to apply a redirect to all pages under a specific path, regardless of the exact URL. This is where using wildcards in your CDN can be helpful. Wildcards allow you to match multiple URLs under a common path, simplifying the redirection of entire sections of your site.\n\nExample: If you want to redirect all URLs under /old-path/ to /new-path/, including any subpages (e.g., /old-path/page1, /old-path/page2), you can configure a wildcard redirect in your CDN.\n\nInput URL pattern: /old-path/*\nRedirect to: /new-path/$1\n\nThe wildcard (*) captures anything after /old-path/, and $1 represents the captured part of the URL, ensuring the structure is maintained in the new location.\n\nKeep in mind that redirecting this way usually redirects the entire namespace and hence creates redirects (301) into 404 for an infinite number of URLs and may therefore lead to undesirable results.\n\nNote: The specifics of configuring these redirects depend on the CDN provider you are using. Different vendors may have their own syntax, interface, and capabilities for handling wildcards and advanced redirect rules. Always consult the documentation of your specific CDN for guidance on how to implement wildcard redirects.\n\nSite Migrations and SEO\n\nMigrating your site to AEM Edge Delivery Services may necessitate some changes to the URLs your site uses. This can have an impact on SEO, so it is important that you plan for this carefully to avoid any disruption. 
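The $1 capture behavior can be illustrated outside any particular CDN rule language; this Python sketch is only a stand-in for whatever syntax your CDN actually uses:

```python
import re

# The CDN pattern /old-path/* corresponds to the capture group below,
# and $1 corresponds to the regex backreference \1.
old_path = re.compile(r"^/old-path/(.*)$")

def redirect(url: str) -> str:
    """Rewrite /old-path/<anything> to /new-path/<anything>."""
    return old_path.sub(r"/new-path/\1", url)

print(redirect("/old-path/page1"))  # /new-path/page1
print(redirect("/old-path/a/b"))    # /new-path/a/b
```

Dry-running a handful of real URLs this way before touching CDN configuration is a cheap guard against the redirect-into-404 problem described above.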
We recommend the following steps to handle the most common scenarios:\n\nAdd your most popular URLs to the redirects sheet to ensure these are handled properly via 301 redirects\nIf the change to URLs follows a common pattern, for example, removing .html from all urls, set this up as a wildcard redirect at your CDN\nIf the change includes more complex situations than a common pattern, best practice is to map the old and new URLs 1-to-1.\nIf you are pruning content as part of your migration, avoid the temptation to redirect these urls to a common destination such as your home page; See the “Avoid irrelevant redirects” note in Google's documentation on site moves for more information\nEnsure your sitemap is up to date\nAfter go-live, monitor 404s via Operational Telemetry and add missing redirects as needed\n\nFor more information, see Google's documentation on site moves.\n\nPrevious\n\nFavicon\n\nUp Next\n\nResources","lastModified":"1750693945","labs":""},{"path":"/docs/sidekick","title":"Using AEM Sidekick","image":"/docs/media_14c67a991f1384b43a512beca53dd8d1365ee5af8.jpg?width=1200&format=pjpg&optimize=medium","description":"AEM Sidekick provides content authors with a toolbar offering context-aware options so that you can edit, preview, and publish your content directly from the pages ...","content":"style\ncontent\nUsing AEM Sidekick\n\nAEM Sidekick provides content authors with a toolbar offering context-aware options so that you can edit, preview, and publish your content directly from the pages of your website.\n\nInstallation\n\nThe Sidekick is available for Google Chrome and chromium-based browsers. 
You can install it from the Chrome web store.\n\nUsing Microsoft Edge?\n\nTo install the Sidekick extension into your Microsoft Edge Browser, you need to enable the following option to \"Allow extensions from other stores\" under edge://extensions in your URL bar.\n\n\nOnce that setting is enabled the sidekick can be installed from the Chrome web store.\n\nFirst Time Use\n\nOnce installed, the Sidekick appears as a toolbar hovering over the bottom portion of your content in the browser on pages that you have authored with AEM. The Sidekick provides you with various tools and actions for navigating and publishing your content.\n\nThe Sidekick is laid out to make the most common tasks easily accessible (from left to right):\n\nDrag handle - The Sidekick defaults to the bottom of the browser, but you can use the Adobe logo to drag the toolbar to another location. This can be useful to view content that might otherwise be hidden behind it without toggling it off. The new position is not persisted and it will snap back to its original position on window resize or reload.\n\nEnvironment Switcher - Select from available modes such as production and live based on the status of your content.\n\nActions - Use these buttons to quickly update or publish your content. What actions are available depends on your current mode and the content source.\n\nMenu - You can show additional options such as adding projects and managing your projects, toggling dark and light mode for the Sidekick, or access help.\n\nSign in - If your site has authentication for authors enabled, you will first need to sign in to authenticate before you can use the Sidekick. Even if your site does not require authentication, you must be signed in if you wish to unpublish and delete pages.\n\nClose - Click the X to toggle Sidekick off. 
Toggle back on using the AEM Sidekick icon next to the address bar in your browser.\n\nThe Sidekick can be invoked in the following contexts:\n\nWithin project environments\nA preview URL (the domain name ending in .aem.page)\nA live URL (the domain name ending in .aem.live)\nA production URL (the domain name being your project’s public host name), custom preview URL, or custom live URL\nNote: This requires adding the project to your configuration\nWithin online editors (depending on your project setup)\nGoogle Docs/Sheets\nMicrosoft Word/Excel\nEnvironment Switcher and Modes\n\nWhen using document-based authoring to create content, content moves from the editor through different environments. The environment switcher allows you to easily jump between them.\n\nSource Mode\n\nWhen editing your document and first using the Sidekick, you default to Source mode. Click the Source button to reveal the additional environments available.\n\nThe environment switcher on a Sidekick in Google Docs.\n\nIn Source mode, you have the following action available.\n\nPreview (re)generates the preview page based on the current document and opens it in a separate tab.\n\nIf you are using the Sidekick in the preview, live, or production environments, the source switcher may also offer an Open in option (a document icon) next to the Source mode option, which opens the online editor of the current page’s source (either a document or the AEM editor) in a separate window.\n\nAt a minimum read access to the document in Google Drive or Microsoft SharePoint is required to use this action.\n\nPreview Mode\n\nThe preview environment reflects the latest changes of the page rendered from the source document. Preview environments are indicated by a blue label. You can send preview URLs to stakeholders so they can review your content before it gets published. 
Note that this option is only enabled if the content you are looking at has been previewed before.\n\nThe preview URL follows the pattern https://branch--site--org.aem.page/path.\n\nbranch, site and org identify the content source and code base to use\n/path corresponds to the location of the content in Google Drive or Microsoft SharePoint, starting from the root folder.\nA / at the end of a path refers to the index document in a folder.\n\nIn Preview mode, the following actions are available:\n\nEdit is used to open the content source for editing.\nUpdate is used to force a content refresh.\nFor example if you have Source and Preview open in side-by-side tabs while working on the content.\nThe effect is identical to the Preview action in the editor.\nPublish makes the current preview version of the page available in the live and production environments.\nVisitors to your public-facing website will not be able to see changes until they are published.\n\nIf you are previewing a sheet, note that there may be additional options available.\n\nPreviewing Sheets in AEM Authoring\n\nPreview mode for sheets works the same as other pages, however there are additional options available that make managing changes to sheets within AEM authoring easier.\n\nWhen previewing a page, the preview loads the sheet using the Adobe Experience Manager Sites Data Rendition tool, to show the resulting JSON in a user-friendly way.\n\nTap or click the X in the banner to close the Data Rendition tool and view the raw JSON.\n\nThe preview lazy loads the first 1000 rows. 
You can search the entirety of the spreadsheet using the Search field.\n\nTo visualize what has changed since the last publication, tap or click Show changes.\n\nThis creates a diff, showing what has changed in your sheet between the current published state of the sheet and the content you are previewing.\nAdded lines are in green, prefixed with a plus.\nRemoved lines are in red, prefixed with a minus.\nTo show all rows along with the changes, tap or click Show all rows.\n\nLive Mode\n\nLive mode is only available if there is no production environment yet. It shows the published content and serves as a stand-in for your project’s production environment. Live environments are indicated by a green label. Note that this option is only enabled if the content you are looking at has been published before.\n\nThe live URL looks almost identical to the preview URL, the only difference being in the 1st level domain: https://main--site--org.aem.live/path\n\nIn Live mode, the following actions are available:\n\nEdit is used to open the content source for editing.\nPublish makes the current preview version of the page available in the live and production environments.\nVisitors to your public-facing website will not be able to see changes until they are published.\nProduction Mode\n\nProduction mode takes you to the production environment, which is your project’s public-facing website. 
Production environments are indicated by a green label.\n\nThis option may not be available during the development stage of your project or because no production environment has been configured yet.\nThis environment can be a 3rd-party or “bring your own” CDN.\n\nIf your public-facing domain name is yourproject.com, the production URL would be https://yourproject.com/path\n\nThe following action is available in Production mode:\n\nPublish makes the current preview version of the page available in the live and production environments.\nVisitors to your public-facing website will not be able to see changes until they are published.\nActivating Sheets\n\nWhen editing a document-based sheet in Google Drive or Sharepoint, the Sidekick’s environment selector is disabled. The only option is to Activate the sheet.\n\nNote that sheets created in AEM authoring allow for additional options.\n\nBulk Actions\n\nOpen the Sidekick in Google Drive or Microsoft SharePoint and select one or more files to bulk preview and publish files and conveniently copy their URLs. The Sidekick counts the number of selected files and prompts you to confirm any bulk actions.\n\nThe Sidekick provides a status of ongoing bulk actions.\n\nWhen the bulk actions are completed successfully, the Sidekick turns green and allows you to copy the URLs of the affected files via the Copy x URLs button or preview them via the Open x URLs button.\n\nIf a bulk action fails, it turns red and provides feedback on the failure via the Details button.\n\nYou can also use the bulk action feature to preview and publish media files. Adobe continues to extend the support for file formats that can be published directly from your content source (Microsoft SharePoint or Google Drive) based on popular use and security considerations. Currently the supported formats are MP4, PDF, SVG, JPG, PNG. 
You can learn more about file size limits here.\n\nLimitation: in Microsoft SharePoint, large selections of more than 60 files may not be available to the Sidekick and may therefore need to be split into smaller batches.\n\nBulk Preview\n\nSelecting multiple documents in Microsoft SharePoint or Google Drive allows you to preview multiple documents en masse.\n\nBulk Publish\n\nSelecting multiple documents in Microsoft SharePoint or Google Drive allows you to publish multiple documents en masse. The Sidekick prompts you to confirm before performing bulk actions.\n\nCopy URLs\n\nUse this feature to copy the preview, live, or production URLs of one or multiple files.\n\nUnpublishing and Deleting Content\n\nIf you no longer require content to be published, you can unpublish it and/or delete it.\n\nUnpublishing content will remove the page from your live and production environments, but still keep it in the preview environment for reference or future republication.\nDeleting permanently removes the content from all environments, implicitly unpublishing it.\n\nUnpublishing and deleting content require you to be signed into the Sidekick. You must also have the appropriate roles assigned to your user.\n\nTo unpublish, you must have the role publish.\nTo delete, you must have either the role publish or author.\n\nBecause unpublishing and deleting are not as common as previewing and publishing content, these options are kept behind an ellipsis menu, shown only when appropriate. Navigate to a published page or its preview and click the ellipsis button on the Sidekick to show the Unpublish option.\n\nYou can only delete pages from the preview environment. Deletion implicitly unpublishes the page if it has been published before.\n\nCaution: The Delete and Unpublish actions cannot be undone!
Double-check the URL and page content to make sure it is really OK to delete and/or unpublish a page.\n\nDue to the finality of deleting pages, you must confirm the deletion by entering DELETE in a dialog.\n\nAdding Your Project\n\nThe Sidekick can auto-detect whether a URL belongs to your project. If you want the Sidekick to also recognize custom domains, such as your project’s production domain, custom preview domain, or custom live domain, you must add the project to the Sidekick.\n\nThis can be done in two ways:\n\nFrom your source documents\n\nNavigate to any source document associated with your project. Then click the context (≡) menu on the Sidekick and select Add this project.\n\nFrom your project URL\n\nNavigate to a project URL (similar to https://main--repo--owner.aem.page/) and click the context (≡) menu on the Sidekick and select Add this project.\n\nRemoving projects and migration\n\nYou can remove projects from the Sidekick by following the same steps, but select Remove this project from the context menu instead.\n\nIf you have used a previous version of the Sidekick, you can migrate your previously-configured projects.\n\nEnable both the previous version of the Sidekick and the new version in your browser.\nRight-click the Sidekick plugin icon in your browser’s menu bar.\nSelect Import projects from AEM Sidekick v6 from the context menu.\nManaging Projects (Experimental)\n\nOnce you have added at least one project to the Sidekick, you can click the context (≡) menu on the Sidekick and select Manage projects to open the Project Admin.\n\nThe Project Admin page lists all projects you added to the Sidekick.\n\nEach project is listed with:\n\nConvenient links to project resources including\nContent location\nPreview URL\nProduction URL (if configured)\nEdit button to edit the name of the project within the Sidekick or to remove the project from the Sidekick.\nSign in button to sign into the project\nA dropdown menu is available from the Sign in
button to select which IDP is used for signing in\nError Messages\n\nFor a complete catalog of error messages appearing in the Sidekick, review Sidekick Errors.\n\nCustomizing the Sidekick\n\nIf you are a developer, you can customize the Sidekick for your project.\n\nPrivacy and Security\n\nReview AEM Sidekick Security for detailed information about how privacy and security are handled in the Sidekick.\n\n3rd Party Libraries\nLibrary\t License \n Custom Elements Polyfill\t BSD \n Lit\t BSD 3-Clause \n MobX\t MIT \n Spectrum Web Components\t Apache 2.0\n\nUp Next\n\nSidekick Library","lastModified":"1768836091","labs":""},{"path":"/docs/slack","title":"Slack","image":"/docs/media_16a9ec5b9abba592f9d4436d322696a4e58e4247d.png?width=1200&format=pjpg&optimize=medium","description":"We are available on dedicated Slack channels for AEM customers and the Adobe team is available to answer your questions. We create one Slack channel ...","content":"style\ncontent\n\nSlack\n\nWe are available on dedicated Slack channels for AEM customers, and the Adobe team is on hand to answer your questions. We create one Slack channel for each customer and invite business users, developers, and authors to the channel to coordinate your launch or migration, answer questions about authoring and development, and help with best practices.\n\nThe Adobe team is globally distributed. During US and European business hours you can expect to receive an answer within one hour. Outside those times, responses may take a bit longer.\n\nTo request a Slack channel, reach out to your Adobe contact.
If you prefer Microsoft Teams, we can collaborate with you there as well.\n\nUp Next\n\nTeams","lastModified":"1765528700","labs":""},{"path":"/docs/architecture","title":"Architecture","image":"/docs/media_12c0b6c754f10cc4e98df8f6e378c543531f7149e.png?width=1200&format=pjpg&optimize=medium","description":"Take a deep dive into the architecture behind Edge Delivery Services and document based authoring in Adobe Experience Manager Sites as a Cloud Service.","content":"style\ncontent\nArchitecture\n\nEdge Delivery Services and document-based authoring are part of the next generation of Adobe Experience Manager’s composable architecture. A key aspect of this architecture is to enable customers to create great experiences using the infrastructure and processes that already enable their success.\n\nHow AEM fits in\n\nhttps://main--helix-website--adobe.hlx.page/docs/architecture-delivery.svg\n\nAt the highest level, Adobe Experience Manager is an origin service that you plug into your existing Content Delivery Network (CDN). There are out-of-the-box integrations with Akamai, Cloudflare, Fastly, and Amazon CloudFront.\n\nThe AEM stack itself is engineered for high performance and availability. To achieve the best possible availability, all delivery services are run in two fully redundant edge providers, have fully redundant storage infrastructure, and are constantly monitored for performance.\n\nThe full picture\n\nhttps://main--helix-website--adobe.hlx.page/docs/architecture-overview.svg\n\nLooking at the full stack, there are four key layers:\n\nYour customer-specific infrastructure like CDNs, DNS, TLS certificates, etc.\nAdobe’s edge compute layer\nAdobe’s storage layer for edge delivery\nYour customer-specific sources of content and code\n\nIn the visitor-facing tier of this architecture, Edge Delivery Services, customers use their existing CDNs, DNS, certificates, etc.,
which then deliver the AEM-produced experience to all modern web browsers, native mobile applications, chatbots, or other backend applications.\n\nThe central tier consists of the experience composition service, a dual-stack (for maximum availability) architecture that serves content stored in highly optimized formats for experience delivery. The storage layer is made up of Content Bus for structured and unstructured content, Media Bus for assets and media, and Code Bus for the code of the site.\n\nThe bottom tier enables authoring productivity and, just like the top tier, it composes the tools and infrastructure already in use, be it AEM Authoring, Microsoft SharePoint, Word, Excel, or Google Drive, Docs, and Sheets.\n\nHow content gets published\n\nhttps://main--helix-website--adobe.hlx.page/docs/architecture-publishing.svg\n\nThe core publishing process is facilitated by the Admin service for preview and publishing and consists of two key steps, both triggered by authors through the Sidekick.\n\nThe preview operation in the Sidekick pulls content from the configured content source, such as SharePoint, Google Drive, or Adobe Experience Manager Sites. This integration is based on the standardized delivery format for structured and unstructured content and can be extended to other third-party content providers. Once the content has been pulled from the originating repository, it is stored in AEM’s storage layer, separated into structured and unstructured content, assets, and media.\n\nIn a second step, authors can publish content. This operation takes the previewed content and makes it available to the delivery tier of our infrastructure. The most important step here is to purge the CDN. AEM will purge every layer of the caching infrastructure, thanks to its deep integration with content delivery networks.\n\nWhat about code?\n\nLike content, AEM uses code from your customer-provided repositories.
Unlike content, code is taken from GitHub, using the AEM Code Sync app for GitHub that was installed during the initial tutorial.\n\nThis integration pulls code from all active branches, enabling effective parallel development and scalable testing. When code is merged into the main branch, the CDN will be purged of all affected resources, making deployments fast and easy.\n\nLifecycle differences of Content, Media, Code, and Configuration\n\nInternally, AEM separates the different resources needed to assemble and deliver a website based on their corresponding lifecycle and manages them separately.\n\nContent (text in documents and spreadsheets, PDFs, SVGs, redirects, etc.) has lifecycle stages of preview and publish. Content is immediately available to all code branches on the corresponding .aem.page and .aem.live URLs depending on its previewed/published state.\n\nMedia (images and videos uploaded or copied and pasted into a document) uses content-addressable storage internally, meaning that every asset is only stored once, with a unique hash following the media_ prefix. Media is accessible on all branch URLs across .aem.page and .aem.live as soon as it is added to the system via a preview operation of the asset itself or of a document that contains the asset. Since the hash is produced from the binary content of the asset itself, it is immutable and can be cached permanently.\n\nCode (JavaScript, CSS, etc.) is managed in branches, creating individual environments for every branch.
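The branch-to-environment naming scheme above can be sketched as a small shell helper; the mysite and myorg names below are placeholder examples, not values from this document:

```shell
# Build the environment URL for a branch: "page" is preview, "live" is live.
# mysite/myorg are placeholders for your own repo and org.
env_url() {
  local branch="$1" repo="$2" org="$3" tld="$4"
  echo "https://${branch}--${repo}--${org}.aem.${tld}"
}

env_url main mysite myorg page        # → https://main--mysite--myorg.aem.page
env_url feature-x mysite myorg live   # → https://feature-x--mysite--myorg.aem.live
```

Because every branch gets its own hostname, a feature branch can be previewed in isolation without touching the main environments.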
Changes are visible across .aem.page and .aem.live URLs at the same time.\n\nConfiguration (via the config service) uses inheritance from a profile and is immediately applied to all branch environments on both .aem.page and .aem.live for a particular site.\n\nUp Next\n\nStaging & Environments","lastModified":"1752066026","labs":""},{"path":"/docs/authoring","title":"Authoring and Publishing Content","image":"/docs/media_1cf7bb3a1af050eff35416bc16502895c1f5a166e.jpg?width=1200&format=pjpg&optimize=medium","description":"How to author, preview and publish content using the AEM Sidekick.","content":"style\ncontent\n\nAuthoring and Publishing Content\nAuthoring Content\nYou already know the most important part.\n\nIf you use Microsoft Word or Google Docs, then you already know how to create content.\n\nYour documents in Word or Google Docs become your pages on your website. Your headings in your documents will become headings on your website. Bold, italic, underlining, lists, images, etc. will all appear on your website.\n\nImages and Videos\n\nTo add an image to your document, drag the image into the page; Word and Google Docs add it as usual. Your image will be resized to fit the browser window of your visitor. Any resizing you do in Word or Google Docs will have no effect.\n\nIt is a good idea to set alternative text for all images you add to the document, as this increases accessibility and helps search engines find your content.\n\nTo do this, use the built-in features of Document Authoring, Microsoft Word, or Google Docs.
See the documentation of either product for more details.\n\nDocument Authoring\nMicrosoft Word\nGoogle Docs\n\nMicrosoft Word and Google Docs do not allow you to just drag and drop videos, but you can add videos via SharePoint or Google Drive, preview and publish them using the Sidekick and add the resulting URL as a link to a suitable block in your document.\n\nLinks\n\nLinks are an important part of every website and you can add them both in Word and Google Docs. If you are creating a link within your website, enter the URL as is, even if the page you are linking to is not public yet (e.g. a preview or live URL).\n\nLinks pointing to other pages on the same site will automatically be adjusted to be relative to your site.\n\nYou can link to headings or sections within a page by appending an anchor value to the URL. Heading elements have automatic lower-cased IDs where spaces are replaced with dashes. For example, if a page /about-us has a heading “Our History”, the URL linking to it would be https://<your_host>/about-us#our-history.\n\nSections\n\nOn some websites you have sections or blades that change background color or otherwise indicate breaks in the content. Creating a section break in both Microsoft Word and Google Docs can be done using --- (three hyphens) on a single line. In Google Docs you can also create sections by inserting a Horizontal Line element into the page (select “Insert → Horizontal Line” from the menu).\n\nBlocks\n\nBlocks are a way to work with more structured content and add special functionality to your site. Which blocks are available to your site depends on what your development team has implemented and differs from site to site. The only block that is common to all sites is the metadata block described previously.\n\nRegardless of site, the structure of a block is always the same: it is a table with a merged first row that serves as the block name (header row). 
The header row may have specific formatting, like a background color, to increase its discoverability and differentiation in a document.\n\nBlocks usually contain content, configuration, or references to other pieces of content, be it from other documents, spreadsheets, or both.\n\nAs you can see from this example, you are free to put any kind of content into the cells of a block, and it is up to the block to either render the content or ignore it. If the site you are working on uses blocks extensively, then you will probably have a reference list of blocks you can use.\n\nBlocks can have variants in parentheses. For example, a Columns block can have a (highlight) variant, which passes a layout hint to the block display logic.\n\nSee the document The Block Collection to learn more about out-of-the-box blocks.\n\nMetadata\n\nSee the document Page Metadata for instructions on how to manage metadata for your pages.\n\nStructured Data in Spreadsheets\n\nYou can put content into spreadsheets, and the spreadsheet is automatically turned into an API that your developers can use. This allows you to use spreadsheets like a headless CMS for data tables, navigation, or feature comparisons, for example.\n\nSee the document Spreadsheets and JSON for more information.\n\nPreview and Publish Content\n\nOnce a document is created in Google Drive or SharePoint, you can preview the corresponding web page and eventually publish the content to your production website.\n\nThe preview function is used to share pages with stakeholders before they are published and available to the general public on your website.\n\nTo preview, publish, or delete content, use the Sidekick, which can be installed as a browser extension.\n\nPreview\n\nIn Word or Google Docs, open the Sidekick, then click the “Preview” button.
This will open a new browser window (check for the popup warning) that has the preview version of your site.\n\nAlthough you can copy and share the URL of this preview, it is not meant for production. It does not have your domain name on it and is invisible to search engines. If the content is ready for publication, you can publish. If you need to make changes, open the Sidekick on the preview page and click “Edit” to go back to Word or Google Docs.\n\nPublish\n\nPublishing makes your content visible to everyone on the internet. To publish something, open the sidekick on a preview page (or follow the instructions above to open the preview again), then click “Publish”. After a few seconds, a new browser window will open, with your page on your public website.\n\nOnce your content has been published, it is visible to everyone on the internet, and search engines will be able to find it.\n\nDelete\n\nGenerally, deleting published content and therefore removing publicly accessible resources from the web can be problematic because of inbound links from search, social, bookmarks and other referring sites. If a page is deleted that was once published, it is recommended to use redirects to make sure that incoming traffic for the deleted page is sent to the next best place. See the document Redirects for more information.\n\nIf you want to remove published content or just delete it from your site as part of a clean-up, doing so is a two-step process.\n\nFirst, delete the source document.\nAlternatively you can rename the page or move it to a different folder, for example your drafts folder.\nThen open the page you want to delete on the preview site, open the sidekick and sign in. You will now see a ... menu with two options: Delete and Unpublish.\nUnpublish removes it from the public production website, but keeps the preview.\nDelete removes the preview, too.\n\nDeleting or unpublishing something is permanent and cannot be undone easily. 
If you want to undo a deletion, you have to restore the original document and then preview and publish it again.\n\nPrevious\n\nPublish\n\nUp Next\n\nMetadata","lastModified":"1756717198","labs":""},{"path":"/docs/bulk-metadata","title":"Bulk Metadata","image":"/docs/media_1700294000e02ecedd96e97c5f692838c399c0fde.jpg?width=1200&format=pjpg&optimize=medium","description":"By default, metadata is managed at the page level, but in some cases, it is useful to apply metadata en masse to a website. Common ...","content":"style\ncontent\n\nBulk Metadata\n\nBy default, metadata is managed at the page level, but in some cases, it is useful to apply metadata en masse to a website. Common use cases include:\n\nDefault metadata such as image should be applied to the entire website to ensure every page has an image defined.\nA certain section of a website should look and feel different from the rest of the website (such as a different template or a theme).\nA certain section of the website should not be indexed or crawled (robots set to noindex).\n\nIf you want to create metadata for many pages at once, create a metadata sheet in the root folder of your website content.\n\nName the file metadata in Document Authoring, Google Drive, or AEM.\nName the file metadata.xlsx in SharePoint.\n\nThe workbook should have only one sheet and at least two columns, as in the following image:\n\nThe column titled URL has the URL pattern of the pages that should get a particular metadata entry.\n\nThe wildcard * (the asterisk) can be used as a prefix or suffix, allowing for flexible matches on the URL pathname. Typical examples include /docs/** or **/docs/**.\n\nThe metadata sheet is evaluated from top to bottom; site-wide metadata set to ** must come before more specific entries.\n\nFor each metadata property, create a column in the worksheet and name it using the property you want to assign. Typical examples include template, theme, or robots.
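The * wildcard matching described above can be illustrated with shell globbing. This is a sketch of the matching idea only, not AEM's actual implementation; the patterns and paths are examples:

```shell
# Sketch: check whether a URL pattern with * wildcards matches a page path.
# Shell case-globbing approximates the idea; AEM's real matcher may differ.
matches() {
  local pattern="$1" path="$2"
  case "$path" in
    $pattern) echo "match" ;;
    *)        echo "no match" ;;
  esac
}

matches '/docs/*' '/docs/setup'    # → match
matches '/docs/*' '/blog/post'     # → no match
```

Broader patterns should sit higher in the sheet, since rows are evaluated from top to bottom.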
Property names will be lower-cased in the HTML.\n\nPage-level metadata added via a metadata block takes precedence over bulk metadata. See Page Metadata and Metadata (block) for more information.\n\nNote: You need to preview and publish the metadata sheet in order to see changes reflected on your site.\n\nhttps://main--helix-website--adobe.aem.page/docs/special-metadata-properties\nOmitting Metadata Values\n\nTo explicitly remove metadata, an empty value (\"\") can be used. This will remove the element or set the corresponding attribute to \"\" for a particular path.\n\nExample:\n\nURL          Canonical\n/**          \"\"\n\n\nThe example above will remove the <link rel=\"canonical\"> from all pages by default, unless there is a specific override, for example from a page metadata block.\n\nAdditional Metadata\n\nWhen metadata is managed by several teams, it is not practical to keep it all in a single metadata file. Multiple metadata files can therefore be optionally configured in the site configuration:\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/metadata.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n    \"source\": [\n        \"/metadata.json\",\n        \"/metadata-2nd.json\",\n        \"/metadata-seo.json\"\n    ]\n}'\n\n\nThe order of the entries in the array dictates the order in which the data is applied. Note that duplicate metadata properties can be overwritten by subsequent sources, but never deleted.
In the above example, if the /metadata.json defines a property title, the same property in /metadata-2nd.json will overwrite the value, but only if it is not empty.\n\nMetadata Source Hierarchy\n\nDefault hierarchy:\n\nPage level metadata block wins over\nFolder-mapped metadata* sheet wins over\nBulk metadata sheet (/metadata.json)\n\nHierarchy if there is additional metadata configured:\n\nPage level metadata block wins over\nFolder-mapped metadata* sheet wins over\nMetadata sheets in configured order\n\n* deprecated\n\nPrevious\n\nPage Metadata\n\nUp Next\n\nPlaceholders","lastModified":"1759491633","labs":""},{"path":"/docs/byo-cdn-akamai-setup","title":"Akamai Setup","image":"/docs/media_1d07c43f5918a053faf2f434f65680737bd41d657.jpg?width=1200&format=pjpg&optimize=medium","description":"The following screenshots illustrate how to use the Akamai Property Manager to configure a property to deliver content from AEM using your Akamai CDN setup. ...","content":"style\ncontent\n\nAkamai Setup\n\nThe following screenshots illustrate how to use the Akamai Property Manager to configure a property to deliver content from AEM using your Akamai CDN setup. Essential settings are marked with a red circle.\n\nEssential Property settings\nOrigin Server\n\nConfiguration properties:\nName\t Value\t Comment \n Origin Server Hostname\t main--<repo>--<organization>.aem.live\t Replace repo and organization with the values for your site. 
\n Forward Host Header\t Origin Hostname\t \n Cache Key Hostname\t Incoming Host Header\t \n Send True Client IP Header\t No\t \n Verification Settings\t Choose your own\t \n Use SNI TLS Extension\t Yes\t \n Trust\t Akamai-managed Certificate Authorities Sets\t \n Akamai Certificate Store\t Enabled\t\n\n\n⚠️Do NOT set Trust to Specific Certificates (pinning) as the certificate is periodically renewed and HTTPS traffic would be served with certificate errors due to the expired pinned certificate.\n\nAdd Behavior: Remove Vary Header\n\nConfiguration properties:\nName\t Value\t Comment \n Remove Vary Header\t On\t\nAdd Behavior: Modify Outgoing Request Header\n\nWe will need a number of outgoing request headers, please see the table below. Keep the \"avoid duplicate headers\" setting enabled for all.\n\nConfiguration properties:\nAction\t Select Header Name\t Custom Header Name\t New Header Value \n Modify\t Other\t X-Forwarded-Host\t {{builtin.AK_HOST}} \n Modify\t Other\t X-BYO-CDN-Type\t akamai \n Modify\t Other\t X-Push-Invalidation\t enabled\nAdd/Modify Behavior: Caching\n\nConfiguration properties:\nName\t Value \n Caching Option\t Honor origin Cache-Control \n Enhanced RFC support\t No \n Honor private\t No \n Honor must-revalidate\t No\nAdd Behavior: HTTP/2\n\n(Optional, but recommended)\n\nAdd Rule: Modify Outgoing Response Header\n\nIn the list of rules in the sidebar, click the button \"+ Rules\"\n\nSelect \"Blank Rule Template\", set a name such as \"Conditionally strip headers\" and click \"Insert Rule\".\n\nTo set the criteria for the rule to be applied click \"+ Match\"\n\nThen select:\n\nIf\nPath\nDoes not match one of\n*.plain.html\n\nClick \"+ Behavior\" and \"Standard property behavior\" to set the behavior if a match is found\n\nThen select \"Modify Outgoing Response Header\"\n\nWith following values:\n\nAction: Remove\nSelect Header Name: Other\nCustom Header Name: X-Robots-Tag\n\nThese are all essential property settings for delivering 
content.\n\nOptional: Authenticate Origin Requests\n\nWhen using token-based Site Authentication, add the following under \"Add Behavior: Outgoing Request Headers\"\n\n\nConfiguration properties:\n\nName\t Value\t Comment \n Action\t Modify\t \n Custom Header Name\t Authorization\t \n New Header Value\t token <YOUR_TOKEN_HERE>\t Replace with the site token value received in token-based Site Authentication \n Avoid Duplicate Headers\t Yes\t\n\nThis setting will ensure that Akamai authenticates requests from your CDN to the AEM Origin, which validates the token received in the Authorization header.\n\nCaveats\n\nDo not enable Akamai mPulse Real Usage Monitoring. While the performance impact on most sites is negligible, for sites built for consistent high performance, enabling it will prevent reaching a Lighthouse Score of 100. In AEM, you have a Real Use Monitoring service built-in, so that dual instrumentation will be unnecessary and is strongly discouraged.\n\nAlso, do not enable Akamai Bot Manager Premier (also called “Transactional Endpoint Protection”) or similar Web Application Firewall offerings, as they markedly interfere with rendering performance and user experience. Your site on AEM is protected against bot attacks on the backend, so that this performance cost comes with negligible benefit.\n\nhttps://main--helix-website--adobe.hlx.page/docs/setup-byo-cdn-push-invalidation-for-akamai\n\nPrevious\n\nBYO CDN Setup Overview","lastModified":"1747144509","labs":""},{"path":"/docs/byo-cdn-cloudflare-worker-setup","title":"Cloudflare Setup","image":"/docs/media_106b5523b26f7079aad2f4e6f54b9fd98a355fe34.jpg?width=1200&format=pjpg&optimize=medium","description":"The following screenshots illustrate how to configure Cloudflare to deliver content. Essential settings are marked with a red circle.","content":"style\ncontent\n\nCloudflare Setup\n\nThe following screenshots illustrate how to configure Cloudflare to deliver content. 
Essential settings are marked with a red circle.\n\n\nThis setup can be done entirely in the browser using the Cloudflare Dashboard. If you are already familiar with Cloudflare Workers, Wrangler, and GitHub, and not afraid of entering commands in a terminal window, you might want to follow the instructions for Cloudflare with wrangler instead.\n\nCreate a Cloudflare site\n\nIf you already have a Cloudflare site and DNS set up, you can skip forward to the Setup push invalidation section.\n\nEnter the domain:\n\n\n\nSelect a plan:\n\nFor this walk-through we’ll use the Free plan.\n\nhttps://main--helix-website--adobe.hlx.page/docs/setup-byo-cdn-push-invalidation-for-cloudflare\nDNS Setup\n\nFor a new site, we’ll start with a simple DNS setup.\n\nCreate a new CNAME record. If your zone is mysite.com and you want to serve traffic on www.mysite.com, then the name should be www\nIf you want to serve traffic on example.com (without a www) then the name should be @\nAnd if you want to serve traffic on all subdomains, then the name should be *\nAs we are using workers to serve the content, the value of the Content field does not matter. It’s easiest to use your ref--repo--owner.aem.live host name here. This is a hostname, not a URL, so leave out the leading https://\n\nMake sure the CNAME record is Proxied:\n\n\n\nSSL/TLS Setup\n\nSelect SSL/TLS from the left pane and Edge Certificates in the dropdown list. On the right side, scroll down to Always Use HTTPS and enable the setting:\n\nConfigure Caching\n\nNavigate to Caching → Configuration and adjust the following settings:\n\nCaching Level: Standard\nBrowser Cache TTL: Respect Existing Headers\n\nCreate Cache Rules\n\nNext, browse to Caching → Cache Rules and create a new cache rule.\n\nNote: The Enable Origin Cache Control option is only available for enterprise accounts.
Free, Pro, and Business customers have this option enabled by default and cannot disable it.\n\nThe following settings should apply:\n\nWhen incoming requests match: hostname contains mydomain.com\nThen: Eligible for cache\n\nUnder Browser TTL, click \"add setting\", then apply \"Respect Origin TTL\"\n\nCreate Worker\n\nReturn to the Cloudflare Dashboard homepage, then navigate the sidebar: Workers & Pages → Overview. Click \"create\" to create a new worker.\n\nOn the next screen, click \"create worker\".\n\n\nEnter a name for the worker (e.g. aem-worker) and click on “Deploy”:\n\n \n\nOn the next screen, click \"Edit code\"\n\nEdit worker code\nCopy the content of this file.\nIn the left pane, replace the existing content with the copied content.\nClick on “Deploy”\nConfirm with \"Save and Deploy\"\n\nNote: If you are using the recommended content-security-policy with ‘strict-dynamic’ and (cached) nonce, please make sure to use the latest worker code, which removes the header in 304 responses, to ensure that there is no mismatch between the nonce cached in your site user’s browser and the one returned by the server.\n\n\nReturn to your worker (by clicking the back arrow in the top right corner), then click on Settings → Variables and “Add variable”:\n\nAdd a variable ORIGIN_HOSTNAME and set the value to the hostname of your origin (e.g. main--mysite--aemsites.aem.live):\n\nIf you have enabled push invalidation, add a second environment variable PUSH_INVALIDATION = enabled.\n\nApply the changes by clicking \"Deploy\".\n\n\nNext, click on Triggers and select “Add route”:\n\n \n\nEnter your domain route (e.g. www.mydomain.com/*), select your zone, and confirm with “Add route”:\n\n\n\nDepending on the setup chosen in DNS Setup, you would select routes www.mydomain.com/* or mydomain.com/*\n\nWarning: if you select *.mydomain.com/* as Cloudflare suggests by default, your site will be available under multiple subdomains.
This will invite attackers trying to open webmail.mydomain.com and similar sites, and lead to duplicate content, potentially depressing your search engine rankings.\n\nAfter completing all steps, you should be all set.\n\nOptional: Authenticate Origin Requests\n\nWhen using token-based Site Authentication, add the following to enable authenticated requests from Cloudflare to AEM.\n\nReturn to Workers → <your worker> → Settings → Variables\nCreate a new environment variable ORIGIN_AUTHENTICATION\nPaste the site token value from token-based Site Authentication (it starts with hlx_)\nConfirm by clicking \"Deploy\"\n\nExpanding the AEM footprint on your website\n\nIf you started by routing only a portion of the website, such as a specific folder (e.g. /blog/*), to your .live origin, you can expose new sections of the site whenever you are ready by repeating the last “Add route” steps as needed, without changing your worker configuration.\n\nWatch out for duplicate content\n\nSearch engines often penalize sites for duplicate content, so it's important to make sure your content is not available on the web elsewhere. Cloudflare, unfortunately, has a default setting that will expose your site on additional network ports. In paid Cloudflare plans you can block traffic on these additional ports. This is a recommended setting for production sites.\n\nPrevious\n\nBYO CDN Setup Overview","lastModified":"1749131071","labs":""},{"path":"/docs/byo-cdn-cloudflare-worker-wrangler-setup","title":"Cloudflare Setup (with wrangler)","image":"/docs/media_1bfd2c2f03573acaaddcd537d21275f24ceee9cf4.png?width=1200&format=pjpg&optimize=medium","description":"The following screenshots illustrate how to configure Cloudflare using the wrangler command line interface to deliver AEM content.
Essential settings are marked with a red ...","content":"style\ncontent\n\nCloudflare Setup (with wrangler)\n\nThe following screenshots illustrate how to configure Cloudflare using the wrangler command line interface to deliver AEM content. Essential settings are marked with a red circle.\n\nCreate a Cloudflare site\n\nEnter the domain:\n\n\n\nSelect a plan:\n\nFor this walk-through we’ll use the Free plan.\n\nhttps://main--helix-website--adobe.hlx.page/docs/setup-byo-cdn-push-invalidation-for-cloudflare\nDNS Setup\n\nWe’ll skip the DNS setup step as this would be beyond the scope of this simple walk-through. Make sure the CNAME record is Proxied:\n\nSSL/TLS Setup\n\nSelect SSL/TLS from the left pane and Edge Certificates in the dropdown list:\n\nOn the right side, scroll down to Always Use HTTPS and enable it:\n\n\n\nConfigure Caching\n\nCreate Page Rule\n\nCreate Worker\n\nFork or create a new GitHub repository using this template.\n\nClone the repository and follow the instructions in the README. You can skip directly to step 2.\n\nAfter completing all steps you should be all set.","lastModified":"1743757663","labs":""},{"path":"/docs/byo-cdn-cloudfront-setup","title":"Amazon Web Services (AWS) CloudFront Setup","image":"/docs/media_14f7e30bdfd5f95c1e5fca4e6ca48ccd78ff5d3c1.png?width=1200&format=pjpg&optimize=medium","description":"The following screenshots illustrate how to configure AWS CloudFront to deliver content from an AEM origin. Essential settings are marked with a red circle.","content":"style\ncontent\n\nAmazon Web Services (AWS) CloudFront Setup\n\nThe following screenshots illustrate how to configure AWS CloudFront to deliver content from an AEM origin. 
Essential settings are marked with a red circle.\n\nCreate a CloudFront distribution\n\nConfigure the origin\n\nUse main--sitename--orgname.aem.live as the Origin domain.\n\nAdd following custom headers:\n\nX-Forwarded-Host: your domain name\nX-BYO-CDN-Type: cloudfront\n\nIf you have successfully configured push invalidation for your project you should also add the following custom header:\n\nX-Push-Invalidation: enabled\n\nCache behavior\n\nKeep the default settings here.\n\nCache key and origin requests\n\nClick \"create cache policy\"\n\nCreate cache policy\n\nSet the Default TTL to 300 seconds.\n\nUnder \"cache key settings\", keep the defaults:\n\nHeaders: none\nCookies: none\nCompression support: gzip, brotli\n\nAnd override the following:\n\nQuery Strings: Include the following query strings\nwidth\nheight\nformat\noptimize\nlimit\noffset\nsheet\n\nClose the browser tab and return to the previous screen. On this screen, click \"create origin request policy\".\n\nCreate origin request policy\n\nKeep the default settings:\n\nHeaders: none\nCookies: none\n\nAnd override the following:\n\nQuery Strings: Include the following query strings\nwidth\nheight\nformat\noptimize\nlimit\noffset\nsheet\n\nThen click create and close the tab to return to the previous screen.\n\nApply Cache policy and origin request policy\n\nAfter returning to the distribution properties, click the reload buttons next to the Cache policy and origin request policy dropdowns, so that the two newly-created policies show up. Next, select the new policies for both Cache policy and origin request policy.\n\n\n\nCreate distribution\n\nSelect whether you want to enable a Web Application Firewall (WAF). Your AEM origin requires no WAF and use of a WAF is neither required nor recommended for AEM origins.\n\nScroll to the end of the page and click \"create distribution\". 
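As an illustrative sketch of the cache-key settings configured above: the cache policy and origin request policy both include exactly the listed query strings, so any other parameters (e.g. tracking parameters) neither fragment the cache nor reach the origin. The helper below is hypothetical, not a CloudFront API, and only models that behavior:

```javascript
// Illustrative only: mirrors the cache policy above, which includes
// exactly these query strings in the cache key.
// cacheKey() is a hypothetical helper, not part of CloudFront.
const CACHE_KEY_PARAMS = ['width', 'height', 'format', 'optimize', 'limit', 'offset', 'sheet'];

function cacheKey(url) {
  const u = new URL(url);
  const kept = new URLSearchParams();
  for (const name of CACHE_KEY_PARAMS) {
    // keep only whitelisted parameters, in a stable order
    for (const value of u.searchParams.getAll(name)) kept.append(name, value);
  }
  const qs = kept.toString();
  return u.origin + u.pathname + (qs ? `?${qs}` : '');
}
```

For example, `cacheKey('https://www.example.com/media_1abc.png?width=750&utm_source=mail')` keeps only `width=750`, so two requests that differ only in tracking parameters share one cached object.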
We need to return to the configuration later, so remember the ID of your distribution.\n\n\n\nCreate Function to remove Age and X-Robots-Tag headers\n\nIn the Cloudfront sidebar, select \"Functions\" and click \"Create function\".\n\nEnter a name for the function (e.g. stripHeaders), an optional description and click on “Create Function”:\n\nReplace the code of the function with the following snippet and click on “Save changes”:\n\nfunction handler(event) {\n    const response = event.response;\n    const request = event.request;\n    const headers = response.headers;\n\n    // Strip age header\n    delete headers['age'];\n\n    // Check if the request URL does not end with '.plain.html'\n    if (!request.uri.endsWith('.plain.html')) {\n        delete headers['x-robots-tag'];\n    }\n\n    return response;\n}\n\n\nSelect the \"Publish\" tab and click on “Publish function”:\n\nFinally, associate the function with your distribution by scrolling down to \"Associated distributions\" and click \"Add association\".\n\n\nIn the following dialog, select:\n\nDistribution: the ID of your new distribution\nEvent type: viewer response\nCache behavior: default\n\nFinally, click \"add association\"\n\nThat’s all (more or less). Please test the distribution in a stage environment.\n\nOptional: Authenticate Origin Requests\n\nIf you have enabled token-based Site Authentication, go back to Cloudfront → Distributions → <your distribution> → Origins → <your AEM origin> → Edit\n\nUnder \"Add custom header\", select \"add header\" and create a header Authorization with value token <your-auth-token>. 
Replace <your-auth-token> with the token value created through token-based Site Authentication (it starts with hlx_) as the header value.\n\n\nThis will ensure that all requests from the AWS Cloudfront CDN to your AEM origin use the correct authorization.\n\nhttps://main--helix-website--adobe.aem.page/docs/setup-byo-cdn-push-invalidation-for-cloudfront\n\nPrevious\n\nBYO CDN Setup Overview","lastModified":"1729252302","labs":""},{"path":"/docs/byo-cdn-fastly-setup","title":"Fastly Setup","image":"/docs/media_1c8e056645c57ad87499ef645e28e010db5583b02.jpg?width=1200&format=pjpg&optimize=medium","description":"The following screenshots illustrate how to configure Fastly to deliver content. Essential settings are marked with a red circle.","content":"style\ncontent\n\nFastly Setup\n\nThe following screenshots illustrate how to configure Fastly to deliver content. Essential settings are marked with a red circle.\n\nhttps://main--helix-website--adobe.aem.page/docs/setup-byo-cdn-push-invalidation-for-fastly\nCreate a Fastly service\n\nGo to the Fastly Management UI and select Create Service, CDN.\n\nAdd Domain\n\nAdd your production domain (e.g. www.mydomain.com):\n\n\nConfigure Origin\n\nAdd your origin (e.g. 
main--{site}--{org}.aem.live) and keep the default settings for:\n\nOverride default host\nDefault compression\nForce TLS & HSTS\n\nIn the new configuration, click \"Edit configuration\" in the top right corner and \"clone version 1 to edit\".\n\nIn the sidebar, select \"Hosts\" underneath \"Origins\" and click the pencil icon to change host settings.\n\nScroll down and change Shielding to Ashburn Metro (IAD) (a non-mandatory but recommended setting):\n\nDon't forget to click \"Update\".\n\nCreate VCL Snippets\n\nCreate a VCL snippet for the recv subroutine with the following VCL code:\n\nif (fastly.ff.visits_this_service == 0) {\n  # edge delivery node\n  if (req.url.qs != \"\") {\n    # remember query string\n    set req.http.X-QS = req.url.qs;\n\n    if (req.url.path !~ \"/media_[0-9a-f]{40,}[/a-zA-Z0-9_-]*\\.[0-9a-z]+$\" \n      && req.url.ext !~ \"(?i)^(gif|png|jpe?g|webp)$\"\n      && req.url.ext != \"json\"\n      && req.url.path != \"/.auth\") {\n      # strip query string from request url\n      set req.url = req.url.path;\n    }\n  }\n}\n\n\nCreate additional VCL snippets for the miss and pass subroutines with the following VCL code:\n\nset bereq.http.X-BYO-CDN-Type = \"fastly\";\nset bereq.http.X-Push-Invalidation = \"enabled\";\n\n\nNote: The X-Push-Invalidation: enabled request header enables push invalidation, including long cache TTLs.\n\n\nCreate a deliver snippet with the following VCL code:\n\nif (fastly.ff.visits_this_service == 0) {\n  # on edge delivery node\n  if (\n    http_status_matches(resp.status, \"301,302,303,307,308\")\n    && req.http.X-QS\n    && resp.http.location\n    && resp.http.location !~ \"\\?.*\\z\"\n  ) {\n    # preserve request query string in redirect location\n    set resp.http.location = resp.http.location \"?\" req.http.X-QS;\n  }\n}\n\n\n\nFinally, create a deliver snippet with the following VCL code:\n\nunset resp.http.Age;\n\nif (req.url.path !~ \"\\.plain\\.html$\") {\n  unset resp.http.X-Robots-Tag;\n}\n\n\nAfter 
completing all steps and activating the service version you should be all set:\n\nOptional: Authenticate Origin Requests\n\nIf you have enabled token-based Site Authentication, navigate in the sidebar to Content → Headers, then \"create a header\" with following settings:\n\nName: Origin Authentication\nType: Request/Set\nDestination: http.Authorization\nSource: \"token <your-token-here>\" (don't forget the quotes, and replace <your-token-here> with the site token retrieved in token-based Site Authentication – the token starts with hlx_)\nIgnore if set: no\nPriority: 10\n\nNote\n\nEdge Delivery Services needs no Web Application Firewall, as it is running on hardened, shared, and ultra-scalable infrastructure. Requests that a WAF would typically intercept are terminated in our CDNs.\n\nPrevious\n\nBYO CDN Setup Overview","lastModified":"1772533192","labs":""},{"path":"/docs/byo-cdn-setup","title":"BYO CDN Setup","image":"/docs/media_141d72765656534440a69d2bf6e223feef96def5d.png?width=1200&format=pjpg&optimize=medium","description":"Customers may use their own CDN to deliver AEM content under their own domain (aka BYO Production CDN). While customers are generally free to configure ...","content":"style\ncontent\n\nBYO CDN Setup\n\nCustomers may use their own CDN to deliver AEM content under their own domain (aka BYO Production CDN). While customers are generally free to configure their CDN according to their own needs there are some settings mandated/recommended by Adobe Experience Manager:\n\nOrigin url\nhttps://main--<yoursite>--<yourorg>.aem.live\nHeaders sent to origin\nX-Forwarded-Host: <production domain>\nX-Push-Invalidation: enabled\n(see Configuring Push Invalidation)\nOrigin cache control\nI.e. Cache TTL on the production CDN is controlled via origin cache control response headers. 
This should be enabled (if available).\nCompression (gzip)\nShould be enabled\nQuery parameters\nMust be forwarded to origin\nMust be included in cache key\nAge response header\nThe Age response header must be either suppressed or overridden (Age: 0)\nVendor-specific setup instructions\n\nIf you already have a CDN, follow the instructions below. If you are not sure which CDN to pick, follow our guide to CDN selection.\n\nCloudflare Worker Setup\n\nLearn how to configure Cloudflare to deliver content.\n\nAkamai Setup\n\nDiscover how to use the Akamai Property Manager to configure a property to deliver content\n\nFastly Setup\n\nThis guide illustrates how to configure Fastly to deliver content.\n\nCloudFront Setup\n\nSet up Amazon Web Services Cloudfront to deliver your AEM site with push invalidation\n\nAdobe-managed CDN\n\nUse the CDN included in your Adobe Experience Manager Sites as a Cloud Service license.\n\n\n\n\nIMPORTANT: The production CDN setup should be validated and tested in a stage environment prior to going public.\n\n\nNote: In case you have not yet completed the upgrade from hlx.live to aem.live, you can find links to the hlx.live-specific versions of the CDN documentation here.","lastModified":"1742214515","labs":""},{"path":"/docs/davidsmodel","title":"David’s Model, Second take.","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"A long time ago, in a galaxy far, far away, I got to a point where I realized that people had a lot of different ...","content":"style\ncontent\n\nDavid’s Model, Second take.\n\nBy: David Nuescheler\n\nHistorical Context\n\nA long time ago, in a galaxy far, far away, I got to a point where I realized that people had a lot of different options on how they could model content in a content repository specification that I was involved in.\n\nSo in 2007 I started writing up my controversial opinions around content modeling in JCR and to make sure that people considered it “just my opinion” and 
nothing that would claim universal applicability, I called it “David’s Model”.\n\nIt seems that this has helped people in the context of modeling content on an infrastructure level.\n\nToday, I often find myself in situations that feel the same when arguing “How to model or structure content in Adobe Experience Manager”.\n\nThe parameters and stakeholders feel very different today: I am no longer primarily worried about the underlying infrastructure, but about the user experience of a person authoring content and reaching a great authoring experience across different content sources, and, to a lesser extent, about the ease of developing against those structures and being able to have portable (block) code across projects and content sources.\n\nIntroduction\n\nThis document should serve as a collection of “Content Modeling” or “Content Structure” best practices as they relate to Adobe Experience Manager and, more importantly, to an intuitive authoring experience across different authoring platforms. A good way of testing a content model is to imagine an author working in different environments (e.g. Word, Google Docs, AEM, Custom Authoring, etc.) 
and making sure that the content model can easily be constructed in an intuitive manner across all possible content sources.\n\nThese “rules” are a reflection of lessons learned from first-hand authoring and authoring support, and are rooted in experience with the real-world limitations of commonly used authoring environments like Microsoft Word Online or Google Docs, but also in the average author’s knowledge of said tooling.\n\nMaking the authoring experience intuitive, simple, and fast is paramount for the long-term success of any project, as it lays the foundation for authors enjoying making updates to websites or other digital experiences.\n\nThese rules are evolving, and I would like to invite discussion and commentary on all of them.\n\nRule #1: Blocks are not great for authoring\n\nGenerally, blocks are not great for authoring, as they are surfaced as tables on the authoring side. They provide a necessary framework for an author to indicate some special functionality or design for a certain component, but for authors it is often easier to work in “Default Content” wherever possible.\nFor developers, blocks are a great way to componentize their work, so there is tension where the developer feels that having something in a block makes their life easier, sacrificing authoring ease-of-use.\n\nThe number of blocks and block variants, as well as (section-) metadata that influence layout, should be limited by a design system. Often variants or blocks can be inferred, meaning that, for example, a carousel containing a video or a quote with an image should not be its own (explicit) block or variant but can be inferred automatically from the content.\n\nA lot of content that is referenced via URL (e.g. 
a video, modal, fragment or an embed) can often be auto blocked, and the author may just be able to paste the URL to get the desired result.\n\nIt is definitely an anti-pattern to take things that are represented natively as default content and put them into a block, so something like a Text, Heading or Image block yields a bad authoring experience.\n\nRule #2: No nested blocks for authors\n\nTo a developer it might very often be tempting to nest complex structures, which in a Word document would lead to a table inside a table. As Rule #1 states that blocks are not desirable, nested blocks are definitely a lot worse.\n\nConsider fragments (referencing other documents) or links (with auto blocking) to reduce the authoring complexity.\n\nRule #3: Limit Row and Column spans\n\nGenerally we use a column span (merged cells) to denote the header with a block name in it. This is relatively straightforward and works well in Word and Google Docs.\nThere are definitely situations where more complex table structures make sense (e.g. a portion of the block content being in two columns and another portion being in three columns), but it is important to understand that creating and managing these structures can be extremely difficult, especially in Word Online, which has very limited support for complex tables.\n\nIf you find yourself in a place where you have a non-trivial rows-and-columns setup with spans / merged cells, it is probably a good idea to consider a different structure.\n\nRule #4: Fully qualified URLs only\n\nWhen referencing content, developers sometimes think in references that are relative to the host, the content repository, or their SharePoint / Google Drive. 
Authors (and most humans) often think of a URL as an opaque token that they copy/paste from their browser without deciphering it into protocol, hostname, pathname, etc.\nIt is always advisable to just let authors work with fully qualified URLs and let either AEM or a developer do the work of extracting e.g. a pathname where needed. As a bonus, the URL can and should link to something that is easily accessible for an author from their document.\n\nRule #5: Lists?\n\nI often find myself in a situation where a block has a list of references, something like a list of related articles, or a list of cards. In HTML a lot of those semantically should be considered lists (mostly <ul><li> combinations). For simple lists, something like some text (possibly with a link) or just regular links, a list in Word or Google Docs may be ideal.\n\nIt turns out that more complex list items are somewhere between “hard” and “practically impossible” for an author to maintain as a list in Word or Google Drive.\nIn that case it is much easier to have the list items be rows of a block table. A good example of that is the cards block in the boilerplate project.\n\nFor simple lists where it is intuitive to have inferred semantics, e.g. a related articles block in a blog post that just contains links to the articles that should be referenced, it may be easiest to have a single table cell inside the block containing all the links, dropping the actual list in the word processor. From a code standpoint it is usually easy to just pull all the links from a block and not specifically worry about the details of the structure in that particular block.\n\nRule #6: Buttons need to inherit from context\n\nIn many design guides we find buttons as a common element across many blocks and default content. In many cases they are outlined in all their variations (e.g. primary vs. secondary, sizes, colors, etc.) 
at the beginning of a design specification together with the specified colors and fonts.\n\nIn projects we have found that it is intuitive for authors to treat links that are on a line by themselves as a button. In many cases it is important to inherit from the block and section context that a particular button is in to make the authoring experience easy.\n\nAs an example, if a button (read: “link by itself on a line”) is part of a hero block, it might assume a bigger size, or if a button is in a section that has an inverted background color it might need to automatically switch to a different foreground / background color combination.\n\nThere are cases where, within a given section / block context, the author needs to be offered a set of explicit choices (e.g. primary vs. secondary button), and in those cases we use combinations of bold and italic, usually bold for an explicit primary button and italic for an explicit secondary button.\n\nIt is conceivable that within a given block / section context there are more than four options for an author to choose from, in which case other formatting options could be used, like underline, strikethrough, etc. However, this is extremely rare and usually an indication that a decision that should be made within the design system is delegated to the author, leading to a less intuitive authoring experience.\n\nRule #7: Filenames matter to authors\n\nThere are a few content management systems that append trailing slashes to all their URLs, and when migrating from websites that are powered by those systems an intuitive approach could be to map every single URL to an index (.docx or gdoc) inside a folder. 
The downside of this approach is that the filenames are not really useful anymore for authors when they are searching for files in Google Drive and SharePoint.\n\nA better approach is to remove the trailing slashes from the URLs and redirect with a 301 (usually from the redirects spreadsheet) from the existing URLs with a trailing slash to the URLs without a trailing slash.\n\n(Related: the same approach should also be used for other undesirable URLs, for example URLs that end in .html.)\n\nThere are situations where this change results in too much of a temporary SEO impact, in which case rewriting the URLs on the CDN may be the appropriate option; however, this should only be done if there is a quantifiable business impact. Long term, it is more desirable to have a clean URL that maps directly to the corresponding file in SharePoint or Google Drive.\n\nRule #8: Access Controls and Content Grouping\n\nIt often makes sense to group content similar to how authoring teams are organized. A good way of thinking about this is that if you have a team that looks after the blog section of your website, technical documentation, support content, a particular country / language, or a product, it makes sense to keep that content together and make sure that the corresponding team has access to the content.\n\nIn organizations this often happens naturally, and it is intuitive for organizations to break up their content teams similar to the structure of their URL space. In some cases the URL structure doesn’t align with authoring teams (and the corresponding access control groups). 
In such a case, combining multiple AEM projects on the CDN tier, tying together content from different sources, may be the right approach if the complexity and size of a site (single domain) gets overwhelming.\nWhile your content source (SharePoint or Google Drive) possibly supports complex access control models, it is desirable to keep access control as simple as possible from a management standpoint.\n\nBoth SharePoint and Google Drive have a concept for grouping content that helps to manage access control in a simple manner: in SharePoint they are called “Sites” or “Libraries” and in Google Drive they are called “Shared Drives”. Both of those have predefined access control roles that are advantageous to use for simple group-membership-based access. Unless there are specific access control requirements, it is recommended to keep access control to these OOTB groups.\n\nSites/Libraries and Shared Drives are built to work well with a certain team and content size and complexity.\n\nSharePoint in particular has best practices on access control complexity within a library, and creating access control complexity beyond that may yield undesirable results. If you get to a place where it becomes unnatural to manage either the size of the content or the size and complexity in a single Site/Library or Shared Drive, it likely makes sense to break things up into multiple Sites/Libraries or Shared Drives and blend the content together from different projects on the CDN.\n\nRule #9: Number of Blocks and Variants\n\nOver the lifecycle of a website it is common that new blocks and variants are added. 
Especially for developers who are not very familiar with the existing block library of the project, it is usually the easiest path to just add net new blocks or variants to make sure that there is no regression with existing content.\n\nWhile it is probably not easy to avoid the sprawl of blocks and variants on projects that have a lot of functionality or justified requirements for a lot of visually diverse content within a single project, it is important to make sure that the core set of blocks and variant combinations that authors commonly need to use is limited. There are situations where special blocks are used infrequently on sites, and often those are placed by developers initially; those blocks probably don’t need to be exposed to authors at the same level in documentation or a block library as the commonly used blocks.\n\nMore generally, large block libraries and a lot of variant / section metadata combinations are less desirable. Maintaining a “minimum use” criterion for blocks and variants based on a content report is a good practice to deprecate and remove superfluous code from the block library that’s exposed to authors.\n\nRule #10: Limit number of Columns\n\nA large number of columns is not a good idea for authoring, as there are practical horizontal screen / document size limitations. More importantly, this is usually a symptom that content is split into small values that do not reflect a proper use of default content and the implied HTML semantics.\nThere are some exceptions to this rule in cases where data is represented in a table, as opposed to content that should be a part of document semantics, and in those cases it is often useful to go the name-value pair route via a spreadsheet instead.\n\nRule #11: Use the block collection content models\n\nThe AEM Block Collection is a great source for well-designed content models. 
If your block produces a feature set similar to that of one of the blocks in the Block Collection, the content model should be similar.\n\nRule #12: Fragments may be harmful\n\nFragments are very useful when the same content is used across a lot of different pages. Obvious good examples are header (navigation) and footer information that is identical throughout a site. These are great examples especially since authors of individual pages do not have to worry about that content showing up on their page, and there is no authoring impact.\nUsing Fragments may also be useful in situations where there is an explicit selection of content that is used across many pages of a site, such as a sign-up form, a legal disclaimer, etc., and the content appears on a page but is not really a part of the canonical content of the page.\n\nIt is important to note that using a fragment comes at a cost of complexity and indirection for authors. Instead of seeing the actual content that is on a page, an author only sees a reference to a fragment, which makes it much less intuitive for authors to make changes and gauge the impact of their changes across pages. This is even amplified in cases of nested fragments.\n\nAlong those lines, from an SEO standpoint it is only advisable to use fragments where having duplicate content is acceptable (meaning that the content inside a fragment doesn't carry significant SEO weight for that page); hence, content that is relevant from an SEO standpoint should always be placed on the page directly.\n\nRule #13: Don't overload image alt-text semantics\n\nIt may be convenient at times to put extra information hidden away into image alt-texts, but this is only recommended in exceptional cases.\nAlt-texts often cannot be easily discovered by authors; there is very little indication about their existence in common document authoring environments (e.g. 
word or google docs).\nDepending on the type of copy/paste operation the alt-text may be lost without the author noticing, and if the alt-text contains special semantics, authors will have to be familiar with specific semantics within the value of an alt-text of individual images on a per block basis.\n\nRule #14: Use Name/Value pairs only for configuration\n\nThere are situations where name/value pairs can be useful to indicate a configuration for a block similar to a section or page (via metadata). This should only be used in exceptional cases and largely for content that is not displayed as such, and is outside of the well established semantic model of document semantics. In section metadata processing the name value pairs get converted into data- attributes, which illustrates the model on how name/value pairs should be thought of in blocks that require configuration. Related to rule #1, it is definitely not recommended to map default content concepts and have name/value pairs for things like Heading, Image or Text.","lastModified":"1732118352","labs":""},{"path":"/docs/experimentation","title":"Experimentation","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Experimentation is the practice of making your site more effective by changing content or functionality, comparing the results with the prior version, and picking the ...","content":"style\ncontent\n\nExperimentation\n\nExperimentation is the practice of making your site more effective by changing content or functionality, comparing the results with the prior version, and picking the improvements that have measurable effects.\n\nWhen done right, it is a powerful pattern to improve conversions, engagement, and visitor experience. 
There are three main pitfalls to avoid when looking to adopt the practice:\n\nToo little: most companies are not experimenting enough, and when they do, they experiment with too little traffic to get meaningful results.\nToo slow: many experimentation frameworks slow the site down so much that the potential new conversions can’t make up for the lost traffic and bounces due to slow rendering\nToo complex: if it takes too much time to set up a new experiment, then fewer experiments will be run.\n\nFor sites running on Adobe Experience Manager, there is the experimentation plugin that allows developers to add an experimentation capability to their sites. Three things make this approach different:\n\nIt’s easy to set up tests in the tools your authors are already familiar with, no separate login is needed\nIt’s deeply integrated into the AEM delivery system, does not slow down your site and is resilient to changes in code and content\nIt allows testing simple content changes as well as experiments covering design, functionality, and code\n\nFollow the rest of this guide to set up your first experiment. There are a few terms that you will find used repeatedly:\n\nControl: the experience prior to running the experiment. Any experiment tries to prove an improvement over the control experience.\nChallenger: an experience that differs from the control experience and that may or may not have better results such as conversions\nVariants: control and challenger are all variants of an experiment\nSignificance: is your challenger really better than the control or is this just luck? Calculating significance allows you to rule out luck and concentrate on results that have a real effect\nExperiment Variants\n\nExperiment variants are the easiest way to get started with experimentation. This approach uses the page you already have as the control, you then create a challenger page that will replace the control for some of your visitors. 
In your challenger page, you can test different variants of your hero blocks, different page layout or call-to-action (CTA) placements and verbiage.\n\nAs long as you stay within the established design language of your website and use existing block functionality you should be able to set up an experiment variant and send it to production in a matter of a few minutes using your established authoring tools.\n\nExperiment Identifier\n\nEvery experiment should have its own identifier for tracking and analytics purposes.\nA good starting point is to come up with a good, unique identifier for your experiment, called the “Experiment ID”.\nExperiments are often numbered linearly or correlated to their Issue ID in an issue tracker or management system that is used. Experiment IDs often use a prefix for the project. Examples are OPT-0134, EXP0004 or CCX0076.\n\nCreate your Challenger Page\n\nBy convention it is recommended to create a folder with a lowercase experiment ID in your /experiments/ folder (for example /experiments/ccx0076/) which is where all the pages for your challenger variants live.\n\nYour experiments folder will look something like this:\n\nOnce the folder has been created, put a copy of your control page into that folder, and apply the changes on the page that you would like to test as part of your experiment variant. 
As an example, let’s assume we have the following page on the website that we want to run an experiment on.\n\nYour copy of the challenger placed in the experiments/<experiment-id> folder might look like this:\n\nPreview and publish the challenger page using the Sidekick. When you are done authoring the challenger page, the URL of the published challenger will be used in the next section to set up the experiment.\n\nSet up your Experiment\n\nAs soon as you have your challengers ready to go, all you need to do is go back to the control page and add some metadata indicating that this page is now part of a test.\n\nThere are two metadata rows that need to be added for an experiment variant:\n\nExperiment: containing your experiment ID\n\nExperiment Variants: containing URLs for all the challengers of this page, separated by line breaks if you have more than one challenger\n\nSee example below:\n\nFor an experiment variant, the traffic split between all the variants (control + challengers) is automatically set to an even distribution. If you have one challenger, there will be an even 50/50 split between control and challenger. If you have two challengers, a third of the traffic is allocated to the control and to each challenger, and so on.\n\nYou can override the traffic split via metadata. For more information, see https://github.com/adobe/aem-experience-decisioning/wiki/Experiments#authoring\n\nPreview and Stage your Experiment Variants\n\nAs soon as you are ready to preview and stage your experiment, you can preview the control page with the additional metadata.
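Hypothetically, the two metadata rows described above might be authored as a metadata table like this (the ID and URL are illustrative):

```
| Metadata            |                                                                        |
|---------------------|------------------------------------------------------------------------|
| Experiment          | ccx0076                                                                |
| Experiment Variants | https://main--mysite--owner.aem.page/experiments/ccx0076/challenger-1  |
```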
Whenever you are previewing a page that has a running experiment, you will see the experimentation overlay in your .aem.page preview environment. It lets you switch between the variants and gives you confidence that your test is set up correctly and ready to be launched.\n\nAuthors can get quick insights into the performance of experiments running on the production site. These insights are helpful in deciding on the duration of the experiment, as well as which variant is best suited for production.\n\nThe data collection to measure the effectiveness of each variant is based on Real User Monitoring.\n\nSend your Experiment Variant to Production\n\nTo send your experiment to production and collect data about the performance of each variant, the only step left is to publish the control page as well as each of the challenger pages.\n\nRunning Experiment Variants on Multiple Pages\n\nSome experiments include changes across multiple pages. To run an experiment across multiple pages, use the same experiment ID in the metadata of each page, and, if you have multiple challengers, make sure the challenger URLs in the Experiment Variants metadata field are listed in the same order on every page.\n\nMake sure Experiment Variants are not indexed\n\nWhen running experiments, it is usually best practice to exclude the variants from the sitemap and ensure they are not indexed by search engines, as the variants could be seen as duplicate content and negatively impact SEO.\n\nThis can be achieved by either of the following two methods:\n\nIf you centralize all experiments in a dedicated folder, like /experiments: make sure your bulk metadata.xlsx sheet contains a row with /experiments/** as path, and a robots column with the value noindex,nofollow.\nIf you keep the experiment control and variants with the regular content: add a robots entry in the page metadata for each variant, with the value 
noindex,nofollow.\nQuestions or Ideas?\n\nPlease contact the community via Discord.\n\nExploring blocks\nIntroduction\n\nBlocks are a foundational concept behind adding form and function to sections of a page. If you followed along with the tutorial, you will know that you can create simple content structures with just text and images. When you want to group pieces of content, add a bit more structure, or add more complex functionality, blocks can help you achieve these goals.\n\nExample block ideas\n\nA block can really be anything you choose, but here are a few possible use cases:\n\nHero\nNavigation\nAccordion\nCarousel\nTab List\nHow they work\n\nBlocks are built using tables in Google Docs or Microsoft Word. These tables are converted to Markdown and rendered as simple divs when requesting HTML.\n\nTo create a block, you only need a table with at least one row and one column:\n\nThis will output in HTML as:\n\nhttps://gist.github.com/auniverseaway/11fd8e0d00038872e77c5f2e4ee9b1f1\n\nTo give some context to our block, we will add a name to the first row and column:\n\nThis will output in HTML as:\n\nhttps://gist.github.com/auniverseaway/280d80aef8c1a5a2937874933d84f9d5\n\nWe now have a CSS class that we can use to either style our block or attach functionality with JavaScript. Of course, an empty div with a class is not tremendously useful.\n\nLet’s add a bit of content using additional rows and columns:\n\nEach accordion item will have a header with an image and text sitting below.
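As a rough sketch of the pattern described above (a table whose first row names the block, with later rows and columns becoming nested divs; the linked gists show the exact markup):

```html
<!-- Hypothetical output for an "accordion" table with one item row of two cells -->
<div class="accordion">
  <div>
    <div><!-- image cell --></div>
    <div><!-- text cell --></div>
  </div>
</div>
```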
I could have made this content a little simpler by joining the image and text columns, but I want to highlight a few important areas of how the HTML is rendered from this table. Let’s look at the markup:\n\nhttps://gist.github.com/auniverseaway/21b2db5b928b6014e9a5a485780038c4\n\nAs we saw before, we have a div with a class representing the entire table. There are a few other notable additions:\n\nEvery row after the first is represented by a div (Line 2).\nEvery column is also represented by a div (Line 7).\nIf you have multiple columns, but have merged cells, you will get an empty div (Line 4).\n\nOnce you have created this structure in your document, you will want to decorate and style it.\n\nDecorating blocks\n\nDecorating a block means performing several optional tasks:\n\nAdd additional classes or IDs to your markup.\nAdd any semantic elements you may want.\nAdd any ARIA or accessibility attributes you need.\nManipulate the DOM to match your desired output.\n\nFor our use case, we’ll decorate the block to add a few classes to help with styling and add an event handler for interactivity:\n(Copy the following code to blocks/accordion/accordion.js)\n\nhttps://gist.github.com/auniverseaway/8e9d73d7df4e26abef151a9be390ccb6\n\nLet’s also add some CSS:\n(Copy the following code to blocks/accordion/accordion.css)\n\nhttps://gist.github.com/auniverseaway/a95567f5ff633b7803710d46c002f9ff\n\nThe end result should look something like this:\n\nGuatemala Huehuetenango Finca Rosma\nA flavor compound of dark sugars, oatmeal-raisin cookies, a faint blackberry accent note and acidic impression.\nEthiopia Dry Process Dari Kidame\nFruits are well-integrated into the coffee's sweetness, and acidity is bright for a dry process. Date sugar, orange marmalade, mango, and aromatic cedar.
Intense chocolate and a blueberry hint in this dark roast.\nColombia Buesaco Alianza Granjeros\nRaw demerara sugar sweetness, aromatic butterscotch, hints of apple, golden raisin, orange peel, and an elegant citrus acidic impression. Full City adds in a ribbon of cacao/chocolate bar, rich bittersweetness. Good for espresso.\nConclusion\n\nAnd that’s it! You’ve taken a simple table in Docs or Word and made it a much richer experience for your visitors.\n\nFrequently Asked Questions\nGeneral Questions\nWhy was this site renamed from Helix to Franklin to Edge Delivery Services?\n\nFranklin and Helix were internal project names for the Adobe Experience Manager engineering team’s initiative to develop an innovative publishing and delivery service situated at the edge. Between 2022 and 2023, Adobe worked with select clients to test these capabilities, focusing on integrating diverse content sources and delivering web experiences that surpass the performance of the significant majority of websites.\n\nIn October 2023, Adobe officially introduced Edge Delivery Services and document-based authoring as part of Adobe Experience Manager, marking the transition from the Franklin and Helix project names to a fully integrated solution.\n\nid\nwhat-is-franklin\nWhat makes Edge Delivery Services websites so fast?\n\nEdge Delivery Services optimizes web performance through a combination of caching and real-time content rendering at the edge.
To maintain high speed, it continuously monitors site performance using strictly necessary operational telemetry and enforces automated performance checks via Google PageSpeed Insights on every code update (pull request).\n\nWant to keep your site fast? Check out the Keeping It 100 guide.\n\nid\nfast\nIs Edge Delivery Services a Static Site Generator (SSG)?\n\nNo, Edge Delivery Services is not a Static Site Generator. Unlike traditional SSGs that require a full rebuild to update content, Edge Delivery Services dynamically renders and serves content at the edge, enabling instant updates without the need for a time-consuming build process.\n\nid\nssg\nIs Edge Delivery Services a Headless CMS, like AEM?\n\nEdge Delivery Services is not a Headless CMS but rather the “head,” leveraging a CMS or document-based authoring as the content source.\n\nWhile Edge Delivery Services primarily delivers fast-loading HTML experiences, it can also provide structured content in JSON format for applications that require it. Content authored in Edge Delivery Services can be aggregated into indices and delivered as JSON, making it compatible with headless implementations.\n\nid\nheadless\nWhat is a good Lighthouse score?\n\nEvery Edge Delivery Services site can and should achieve a Lighthouse score of 100. AEM’s strictly necessary operational telemetry provides insights into actual site performance over time, allowing you to assess whether your site continues to meet that score.\n\nWant to keep your Lighthouse score at 100? Check out the Keeping It 100 guide.\n\nid\nlighthouse\nWhat is the difference between aem.page and aem.live?\n\naem.page provides an up-to-date preview of unpublished content. 
aem.live serves published content and is used as the origin for your production CDN.\n\nPreviously, these domains were known as hlx.page and hlx.live.\n\nTake a look at our architecture diagram for more information on how content gets published.\n\nid\npage-vs-live\nWhich browsers are supported?\n\nEdge Delivery Services supports all modern browsers, including Google Chrome, Apple Safari, and Microsoft Edge. Internet Explorer (IE) is not supported.\n\nid\nbrowser-support\nHow much does Edge Delivery Services cost?\n\nYour Adobe account team can provide licensing estimates based on your target site’s needs and traffic expectations, as well as information on available trial license opportunities.\n\nGet started for free by following the Getting Started: Developer Tutorial.\n\nid\ncost\nDoes Edge Delivery Services require an Enterprise license?\n\nEdge Delivery Services is part of AEM Sites and requires an AEM Sites license. Development access to the service does not require a separate license and is available for free.\n\nid\nlicensing\nCan I run Edge Delivery Services without full AEM Sites?\n\nYes, Edge Delivery Services can run independently without requiring an AEM Sites author or publish instance. If you want to use AEM authoring, an AEM Sites author instance is required.\n\nid\naem-sites\nHow can we communicate with the AEM engineering team?\n\nThe AEM team is available for support and collaboration through both Slack and Microsoft Teams.\n\nid\ncommunication-channel\nWhere can I learn more?\n\nConnect with the AEM Community on Discord or join an Adobe User Group to learn and build connections with fellow practitioners. For more information, visit our homepage at aem.live or the Experience Manager documentation.\n\nid\nlearn-more\nUse Cases\nIs Edge Delivery Services better for large or small sites?\n\nEdge Delivery Services is a great solution for both small and large sites. 
Small sites benefit from its ease of setup, while large sites can take advantage of its ability to support many authors, frequent updates, and high traffic.\n\nid\nsite-size\nHow big are the biggest Edge Delivery Services sites?\n\nThe biggest sites have over 100k pages and hundreds of authors.\n\nid\nbig\nIs Edge Delivery Services a good solution for landing pages?\n\nYes, Edge Delivery Services is well-suited for landing pages. Adobe uses pages.adobe.com to manage hundreds of landing pages, each owned and maintained by independent teams.\n\nid\nlanding\nIs Edge Delivery Services a good fit for sites with dozens or hundreds of local markets, languages, or sub-brands?\n\nYes, Edge Delivery Services is a great solution for global enterprises managing multiple markets, languages, and sub-brands. It supports centralized content management and inheritance, enabling global updates while allowing local teams to make adjustments. Content rollout and localization can be integrated with existing translation memory systems in content sources like SharePoint and Google Drive.\n\nLearn more on the Translation and Localization page.\n\nid\nglobal-enterprise\nIs Edge Delivery Services a suitable platform for small e-commerce websites?\n\nYes, Edge Delivery Services can be used for e-commerce websites. See Commerce Storefront.\n\nid\ne-commerce\nCan Edge Delivery Services generate content for external platforms like GitBook?\n\nYes, but Edge Delivery Services no longer natively supports Markdown documents in GitHub as a content source. 
To publish content from GitHub, Markdown documents must first be copied to a supported authoring environment where they can be previewed and published.\n\nid\nmarkdown-content\nDoes Edge Delivery Services support content approval workflows?\n\nEdge Delivery Services does not provide built-in content approval workflows, but instead delegates workflow management to the content source.\n\nFor document review and approval processes within SharePoint or Google Drive, we recommend Adobe Workfront.\n\nid\nworkflows\nSite Configuration and Management\nHow can I manage multiple subdomains?\n\nEdge Delivery Services allows customers to use their own CDN to deliver content under their own domains or subdomains (BYO Production CDN).\n\nSites can be created with different code repositories or via a repoless setup, and the site configuration can point to different CDN endpoints for Push Invalidation.\n\nLearn more on the BYO CDN Setup page.\n\nid\nsub-domain\nHow does Edge Delivery Services handle inheritance and rollouts across multiple sites?\n\nEdge Delivery Services has a broad range of facilities for centralized content and code management, allowing updates to be shared across multiple sites while still allowing for localized adjustments. AEM Universal Editor and Document Authoring have out-of-the-box solutions for inheritance and content rollouts.\n\nid\nmsm\nCan Edge Delivery Services be used alongside other CMS platforms on the same site?\n\nYes, Edge Delivery Services can be used alongside other CMS platforms when managed at the CDN tier.
Many large, mature sites combine content from multiple origins, allowing different sections to be powered by separate AEM projects or external CMS solutions.\n\nIt helps to have each top-level path (section of a site) delivered from the same CMS, so that CDN configurations are easier to manage than with a random list of URLs that share no common parent.\n\nid\nmulti-origin\nCan Edge Delivery Services support secure content access for intranets, portals, or closed user groups?\n\nYes, Edge Delivery Services can support secure content access for intranets, portals, and closed user groups. Customer-specific authentication and authorization are typically enforced at the CDN tier, where all CDN vendors offer a broad set of integrations with existing identity providers.\n\nIn Edge Delivery Services, access to .page and .live origins can be restricted (independently), ensuring that content is only accessible through the CDN with the necessary authentication in place.\n\nLearn more on the Configuring Site Authentication page.\n\nid\ncug\nHow does Edge Delivery Services handle access control and multi-tenant authoring?\n\nEdge Delivery Services follows the access control model of its connected content source. For example, SharePoint and Google Drive provide robust enterprise-level permissions that users and system administrators are already familiar with. For partial page access control, fragments are recommended to isolate content with different access control requirements into separate documents. Separate teams can control their own content within their assigned folders, ensuring isolated workflows.\n\nAccess to specific blocks can be restricted (via the content library) and blocks can also be configured to prevent rendering in specific sections of a site.\n\nid\naccess-control\nWhat is the best way to fetch backend data?\n\nThe client (browser) requests data from backend systems.
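When authentication is needed, the request typically goes through an edge middleware, as described next. A minimal sketch of such a middleware (all names are hypothetical, and fetch is injectable so the snippet is self-contained; real edge workers use vendor-specific APIs):

```javascript
// Hypothetical edge middleware: forwards a browser request to a backend,
// attaching credentials the browser never sees. Requires Node 18+ or a
// browser/worker runtime with global Request/fetch.
async function handleRequest(request, { backendUrl, apiToken, fetchImpl = fetch }) {
  const backendRequest = new Request(backendUrl, {
    method: request.method,
    // Copy incoming headers and add the secret token server-side.
    headers: { ...Object.fromEntries(request.headers), authorization: `Bearer ${apiToken}` },
  });
  return fetchImpl(backendRequest);
}
```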
If authentication is required, a middleware layer running on the Edge/CDN (e.g., Edge Workers) acts as an intermediary, managing authentication and facilitating communication between the client and backend.\n\nBrowser → Middleware (Edge Worker) → Backend\n\nid\nbackend-data\nDoes Edge Delivery Services support server-side customizations or includes (SSI/ESI)?\n\nEdge Delivery Services renders semantic markup (excluding fragments) on the server side, but it does not support any server-side customizations or includes. Instead, it generates optimized, static markup that can be adjusted on the client side for styling, functionality, and personalization. While some customer CDNs support ESI, it generally introduces performance and complexity trade-offs, and we have found no benefit to metrics like Largest Contentful Paint (LCP) or Cumulative Layout Shift (CLS) when using ESI over client-side includes. Because of this, we advise against using ESI.\n\nid\nssi-esi\nHow does Edge Delivery Services support taxonomy management?\n\nEdge Delivery Services supports taxonomy management through structured metadata and hierarchical tagging. Common approaches include managing taxonomies in spreadsheets or documents to define hierarchical structures and adding tags to the metadata of pages or content fragments.\n\nExternal taxonomy systems or APIs can be embedded into the authoring environment as a Sidekick plugin. AI-based tagging can be implemented as an authoring plugin or directly in the delivery tier, depending on the use case.\n\nid\ntaxonomy\nHow does Edge Delivery Services handle caching and content invalidation?\n\nEdge Delivery Services uses caching to optimize performance while ensuring content updates are reflected quickly.
When authors publish changes, the system automatically and surgically purges cached content at multiple levels, including the CDN.\n\nFor BYO CDNs (including Cloudflare, Fastly, Akamai, and CloudFront), push invalidation can be enabled to purge content by URL and cache key whenever updates are published. Learn more on the Configuring Push Invalidation page.\n\nid\ncaching\nHow can I confirm that a push invalidation token is working?\n\nYou can use the CDN Setup tool to confirm that your CDN automatically purges outdated content when content changes are published. This tool validates your project’s vendor-specific properties and credentials.\n\nid\nvalidate-token\nWhat does the development flow look like? Is it independent of the authoring flow?\n\nThe development flow can be completely separated from the authoring flow. However, it's important to note that AEM allows content authoring and development to proceed in parallel, significantly reducing project durations. Authors and developers are encouraged to collaborate closely to ensure an optimal outcome.\n\nSee Best Practices and Anatomy of a Project for more details.\n\nid\ndev-flow\nWhat content should be created by an author versus a developer?\n\nAuthors create content in the chosen authoring environment, while developers build site functionality using CSS and JavaScript in GitHub.\n\nAuthors and developers also create work-in-progress content for new features in the /drafts/ folder for testing, training, and development.\n\nid\ncontent-authoring\nIs Edge Delivery Services down right now?\n\nEdge Delivery Services is probably not down, but you can always check Adobe’s status page for real-time service updates and interruption notifications.\n\nid\ndown-for-everyone-or-just-me\nIndexing and SEO\nWhat are the best practices for search engine optimization (SEO)?\n\nAdd metadata to your pages using the metadata block and metadata sheet to improve indexing.
Ensure your site achieves strong Core Web Vitals (CWV) scores for fast loading and a smooth user experience. Use clear headings, alt text for images, and descriptive URLs.\n\nid\nseo\nHow do I optimize Edge Delivery Services pages for social media sharing on platforms like Facebook and Twitter?\n\nSet a default placeholder image to ensure consistent previews. Use the metadata block and metadata sheet to define the title, description, and other page metadata.\n\nid\nsocial-share\nCan I retrieve a list of pages within a folder?\n\nYes, Edge Delivery Services can generate an index (or multiple indices) of pages within a folder, storing the data in a spreadsheet and serving it as JSON.\n\nLearn more on the Indexing page.\n\nid\npage-list\nHow does Google index Edge Delivery Services pages?\n\nSearch engines, like Google, index Edge Delivery Services pages the same way they index traditional web pages by crawling the published content and evaluating metadata, page structure, and performance metrics like Core Web Vitals.\n\nid\nsearch-engine-indexing\nCan Google index Edge Delivery Services pages that load content fragments dynamically?\n\nYes, Google can index Edge Delivery Services pages that load content fragments dynamically via the fetch API. However, to avoid duplicate indexing issues, it is recommended to disable indexing for fragment URLs using the noindex directive in metadata.\n\nid\nfragment-indexing\nHow should additional fields be queried when all pages are stored in a single index?\n\nEdge Delivery Services supports querying specific fields within a single index and suggests creating additional indexes when needed. 
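For example, client code might page through an index served as JSON. This is a sketch: the /query-index.json path and the { total, data } response shape follow the Indexing documentation, and the fetch function is injectable so the example is self-contained:

```javascript
// Fetch every row of a query index, paging with offset/limit.
async function fetchAllIndexRows(indexUrl, { pageSize = 256, fetchImpl = fetch } = {}) {
  const rows = [];
  let offset = 0;
  let total = Infinity;
  while (offset < total) {
    const resp = await fetchImpl(`${indexUrl}?offset=${offset}&limit=${pageSize}`);
    const { total: t, data } = await resp.json();
    total = t;
    rows.push(...data);
    if (data.length === 0) break; // guard against a short or empty page
    offset += data.length;
  }
  return rows;
}
```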
Best practices include creating one index per language and additional dedicated indexes for common queries.\n\nid\nquery-index\nHow can I include specific sections of a site in query-index.json for indexing?\n\nEdge Delivery Services supports indexing parts of the site by configuring an index at that content path or via a custom index configuration.\n\nLearn more about setting up index definitions on the Indexing page.\n\nid\nsection-index\nHow do I use selectors for indexing content within the decorated DOM?\n\nThe indexing service captures the non-decorated DOM from the initial page load, before JavaScript executes. Content added dynamically won’t be indexed by default, but can be included by combining it with the indexing sheet via formulas or API calls, depending on where the data needs to surface.\n\nid\nindex-selectors\nHow do I prevent a page from being indexed?\n\nTo prevent individual pages from being indexed, add the noindex directive in the metadata block. To exclude larger site sections, configure exclusions in the bulk metadata sheet.\n\nid\nno-index\nWhat strategies should I use to manage URLs and prevent SEO ranking loss?\n\nMaintaining the existing URL structure is the best way to prevent ranking loss. If simplifying or modernizing URLs during a site rebuild, use redirects to guide search engines and users to the new URLs. Temporary ranking fluctuations may occur until search engines reindex the updated content and recognize the new URLs as stable.\n\nid\nseo-loss\nDoes Edge Delivery Services support RSS feeds?\n\nEdge Delivery Services does not have built-in RSS or Atom feed publishing. However, the Sidekick is extensible and can be configured to generate Atom feeds for RSS readers from a page index.\n\nid\nrss\nLocalization\nWhat localization (l10n) capabilities does Edge Delivery Services support?\n\nEdge Delivery Services supports localization by leveraging the built-in translation tools of your chosen content source.
This allows customers to use familiar systems for content authoring and translation.\n\nLearn more on the Translation and Localization page.\n\nid\ntranslation\nWhat are the best practices for handling localized or country-specific sites?\n\nOrganize content into different folders for each language version. Use translation tools integrated with your authoring environment. Create language-specific indices. Use placeholders for different languages.\n\nid\ni18n\nDoes Edge Delivery Services support HREFLang?\n\nYes, Edge Delivery Services supports HREFLang as part of sitemaps (sitemap.xml). It can be configured to reference different locales where needed.\n\nLearn more on the Sitemaps: Multiple Sitemaps page.\n\nid\nhreflang\nPerformance and Monitoring\nWhat is Operational Telemetry?\n\nOperational Telemetry is a service that measures how fast your site loads for actual visitors, what errors users experience, and where interactions are broken. Unlike lab-based tools like Lighthouse, operational telemetry provides more accurate real-world performance data.\n\nid\nrum\nHow do I enable Operational Telemetry on my site?\n\nTo enable operational telemetry, contact the AEM engineering team. They will open a pull request (PR) with the necessary JavaScript. All operational telemetry data is GDPR-compliant and does not collect personally identifiable information (PII).\n\nLearn more on the Developing Real User Monitoring page.\n\nid\nenable-rum\nHow can I track page KPIs such as completions, clicks, and impressions?\n\nEdge Delivery Services supports tracking page KPIs through operational telemetry. Operational telemetry collection can be extended with custom checkpoints to track specific conversion events.
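A custom checkpoint call might look like this. The sketch assumes a sampleRUM-style helper as found in the AEM boilerplate; the exact name and signature can vary by project, so a stub stands in here to keep the snippet self-contained:

```javascript
// Stub standing in for the project's real telemetry helper (e.g. in scripts/aem.js).
// The real helper samples and batches; this stub just records the calls it receives.
const recordedEvents = [];
function sampleRUM(checkpoint, data = {}) {
  recordedEvents.push({ checkpoint, ...data });
}

// Track a hypothetical conversion event when a newsletter form is submitted.
sampleRUM('convert', { source: 'newsletter-form' });
```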
Note that operational telemetry only samples visitor interactions and does not track individual users.\n\nLearn more on the Developing Real User Monitoring: Checkpoints page.\n\nid\nkpis\nAuthoring Content\nWhat are the differences between authoring modes, like Universal Editor and Document Authoring?\n\nFill out the Where To Author Your Site questionnaire and see which option fits your needs best.\n\nEdge Delivery Services is agnostic to the authoring source you use, but we recommend picking the right one for the actual authors who need to update content every day.\n\nid\nauthoring-environments\nWhere can I find the Edge Delivery Services URL of my document?\n\nTo find the URL of your document, open the Sidekick in your content editor and select “Preview”. This will take you to the preview URL of your page.\n\nid\npreview-url\nHow do I edit pages?\n\nEdge Delivery Services supports content editing in Microsoft Word and Excel, Google Docs and Sheets, AEM’s Universal Editor, as well as custom authoring environments.\n\nLearn more on the Authoring Content page.\n\nid\nedit-pages\nHow do I publish a page?\n\nTo publish a page in Edge Delivery Services, open the Sidekick on the preview URL of your page and select “Publish”. This will make the page publicly available on your site.\n\nLearn more on the Preview and Publish Content page.\n\nid\npublish-page\nWhy are my changes in Microsoft Word not appearing on my site after previewing or publishing?\n\nThe SharePoint API can sometimes introduce a delay of up to three minutes before changes made in Microsoft Word become visible in Edge Delivery Services.
To ensure changes appear immediately, manually save the document using CMD+S (Mac) or CTRL+S (Windows) before previewing and publishing.\n\nYou can also click on ‘Update’ in the Sidekick on the Preview page to get the latest saved content.\n\nid\ncontent-delay\nHow do I unpublish a page?\n\nTo unpublish a page, first delete the corresponding source document. Then, open the Sidekick on the preview site. The “Unpublish” button will now appear, allowing you to remove the page from the public site while keeping it available in preview.\n\nLearn more on the Authoring and Publishing Content page.\n\nid\nunpublish-page\nHow do I delete a page?\n\nTo delete a page, first delete the corresponding source document. Then, open the Sidekick on the preview site. The “Delete” button will now appear, allowing you to remove the page from both the public site and the preview environment.\n\nThe “Unpublish” and “Delete” buttons only appear after the source document has been deleted, ensuring that pages are not removed accidentally.\n\nLearn more on the Authoring and Publishing Content page.\n\nid\ndelete-page\nWhy does my new document return a “404 Not Found” error?\n\nNew documents do not automatically generate a preview URL. To create one, open the Sidekick in the editor and click “Preview”.\n\nid\nerror-404\nHow do I schedule content publication in Edge Delivery Services?\n\nEdge Delivery Services allows you to schedule API-driven publication using spreadsheets. For large-scale launches, such as a rebrand or product launch, best practices include using GitHub branches or forks and copies of large content folders to coordinate and time content changes effectively.\n\nLearn more on the Scheduling page.\n\nid\nlaunches\nWhat if I have a big release of content that needs to be published at the same time?\n\nEdge Delivery Services has APIs to support the concept of Snapshots that allow customers to Add/Remove individual pages to a collection called a “Snapshot Manifest”. 
The Snapshot can then be reviewed and approved or rejected. Publishing a Snapshot will take everything from the Snapshot Manifest and publish it all at once. You can use the Snapshot Admin tool to manage Snapshots.\n\nid\nsnapshot\nWhat should I do if my content seems outdated?\n\nFirst, ensure that any recent changes to the content have been saved in your authoring environment. Then, open the Sidekick and select “Preview” to refresh the preview of your page. You can also click on ‘Update’ in the Sidekick if you are on the preview URL.\n\nid\noutdated-content\nHow does Edge Delivery Services generate URLs? What characters are allowed?\n\nEdge Delivery Services constructs page URLs based on document and folder names. To maintain clean and easy-to-type URLs, only lowercase letters (a-z), numbers (0-9), and dashes (-) are allowed. Unsupported characters are automatically transformed.\n\nLearn more on the Document Naming page.\n\nid\nfilenames\nDoes Edge Delivery Services have a WYSIWYG editor for in-browser editing?\n\nEdge Delivery Services supports multiple authoring environments, including AEM Universal Editor, Adobe Document Authoring, and commonly used existing tools like Microsoft Word, Excel, Google Docs, and Google Sheets.\n\nid\nwysiwyg\nDoes Edge Delivery Services support inline text styling within a paragraph?\n\nEdge Delivery Services supports inline text styling with semantic formatting: bold, italic, underline, strikethrough, subscript, superscript, and code. Colors and font settings (like font face and size) must be defined explicitly in CSS.\n\nid\ninline-styling\nHow do I set an image size?\n\nEdge Delivery Services automatically optimizes image sizes, so authors do not need to manually set dimensions.
Developers can use existing helper functions to get the appropriate size of the image they want to use.\n\nid\nimage-size\nHow do I add alt text to images?\n\nIn Word, select the image, go to the “Picture” ribbon, and choose “Alt Text.” In Google Docs, right-click the image and select “Alt Text.”\n\nOther authoring environments may have different interfaces for adding alt text. It’s important to ensure alt text is properly set wherever content is authored to improve accessibility and SEO of your page.\n\nid\nalt-text\nCan a metadata block be used as the first block in a document?\n\nYes, a metadata block can be placed anywhere in a document, including as the first block.\n\nid\nmetadata-placement\nWhat is the metadata spreadsheet and how is it used?\n\nThe metadata spreadsheet is a centralized sheet for managing site-wide metadata for SEO. It allows you to set default metadata, exclude specific page sections from indexing, and more.\n\nLearn more on the Bulk Metadata page.\n\nid\nmetadata-sheet\nHow do I add name anchors to headlines in pages?\n\nEdge Delivery Services automatically generates id attributes for all headings. For example, a heading titled “This is a Title” will be accessible via the URL fragment #this-is-a-title, eliminating the need to manually add anchors.\n\nid\nanchors\nWhat do the three dashes (---) in a document do?\n\nThree dashes indicate a section break. Sections can be useful in structuring pages, applying different background styles, or triggering animations.\n\nid\nthree-dashes\nDoes Edge Delivery Services support videos as source content?\n\nYes, short videos can be uploaded to your authoring environment. 
After previewing and publishing the video with Sidekick, a URL will be generated that can be referenced in the source document.\n\nid\nvideo-content\nShould animated GIFs or MP4 files be used for short video content?\n\nMP4 is recommended for short videos and animations because it provides smaller file sizes, broader compatibility, and a more predictable authoring experience (Google Docs displays animated GIFs inline, while Microsoft Word shows only the first frame, though the full animation is still hosted).\n\nid\ngifs\nCan I use embeds or iframes?\n\nYes, the embed block supports embedding content from external sources, including iframes such as YouTube videos.\n\nid\nembeds\nHow can I manage editable content templates?\n\nContent templates can be managed using the templating mechanisms available in your chosen authoring environment. For example, Microsoft Word and Google Docs provide a broad range of built-in template functionality.\n\nIn any environment, you can create example documents (such as article pages, landing pages, or two-column layouts) that authors can duplicate and modify as needed.\n\nid\npage-templates\nHow can I compare different versions of a page?\n\nVersion comparison is managed through the versioning features available in your chosen authoring environment.
For example, Microsoft SharePoint and Word provide built-in version tracking and document comparison tools.\n\nExternal tools, such as DiffSite, can also be used to compare differences between the preview and publish states of a page.\n\nid\ncompare-documents\nDoes Edge Delivery Services offer content reporting or a search-and-replace feature?\n\nEdge Delivery Services has APIs and tools like Image Audit or Page Status to help with common content reporting needs in addition to what the source content repositories provide.\n\nWhile search and replace is easy within an individual document, doing it across multiple documents requires either the AEM Importer or custom scripts that use libraries to bulk-update Word or Google documents.\n\nid\nsearch-and-replace\nHow are internal and external links handled in Edge Delivery Services?\n\nInternal links to .page and .live URLs are automatically converted into relative links to ensure proper navigation between pages. External links to third-party sites remain unchanged and function as standard external links.\n\nid\nlinks\nHow are links to documents (content source) handled?\n\nBy default, links to documents point to their original locations in the content source. If a document is published as a page on the site, its URL will match its folder structure from the source (unless a redirect is configured). For authors, it is often more intuitive to use absolute links copied and pasted from the browser.\n\nid\ndoc-links\nSidekick\nWhat is the AEM Sidekick?\n\nThe AEM Sidekick is a browser extension that provides editing, previewing, and publishing capabilities for your Edge Delivery Services site.
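The internal-link conversion described above can be sketched as follows. This is an illustration of the behavior, not the actual implementation; the .hlx.* domains are included on the assumption that legacy preview hosts are treated the same way:

```javascript
// Sketch: turn absolute preview/live URLs into relative links,
// leaving third-party links untouched (illustrative only).
function toRelative(href) {
  const url = new URL(href);
  const internal = ['.aem.page', '.aem.live', '.hlx.page', '.hlx.live']
    .some((suffix) => url.hostname.endsWith(suffix));
  return internal ? url.pathname + url.search + url.hash : href;
}

console.log(toRelative('https://main--mysite--myorg.aem.page/about')); // -> /about
```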
It can support multiple projects.\n\nid\nsidekick\nHow do I install the Sidekick?\n\nThe Sidekick is available for Chrome-based browsers (including Microsoft Edge) from the Chrome web store.\n\nid\ninstall-sidekick\nWhat should I do if my Sidekick isn’t working?\n\nIf your Sidekick is not functioning, we recommend you reinstall the Sidekick for your project by visiting the AEM Sidekick Configurator.\n\nid\nsidekick-error\nWith a Dynamic Media license, can authors select different asset variations in the Sidekick?\n\nCurrently, the Asset Selector Sidekick Plugin does not provide an option to pick a variation. If the customer has a Dynamic Media (DM) license with DM Open API access, they can configure the Asset Selector to copy the DM asset link as a reference. Authors can then modify the copied DM URL by appending query parameters to retrieve different asset variations.\n\nid\ndm-sidekick\nDevelopment and Deployment\nHow do I set up a local development environment?\n\nTo develop Edge Delivery Services pages locally, install Node.js and run npm install -g @adobe/aem-cli && aem up. This starts a local server that updates with code changes in real time and uses production content for accurate previews.\n\nFor a full guide to get started, visit the Developer Tutorial.\n\nid\ndevelop-locally\nHow do I create a testing or staging environment?\n\nEdge Delivery Services is fully serverless, eliminating the need for dedicated environments. To try new functionality, simply create a branch in your GitHub repository.\n\nLearn more on the Staging & Environments page.\n\nid\nstaging\nHow does Edge Delivery Services handle concurrent code editing by multiple developers?\n\nAs long as you are developing locally, your code remains on your machine and only content is shared. 
When you commit code to a GitHub repository, Git manages version control and merges changes from multiple developers.\n\nid\nconcurrent-development\nDoes Edge Delivery Services support private GitHub repositories?\n\nYes, Edge Delivery Services supports private GitHub repositories. As with public repositories, the AEM Code Sync bot needs to be installed on the repository.\n\nid\nprivate-github\nCan Edge Delivery Services integrate with GitHub Enterprise?\n\nEdge Delivery Services supports GitHub Enterprise Cloud but does not support GitHub Enterprise Server (because Edge Delivery Services requires access to a public GitHub API).\n\nid\ngithub-enterprise\nDoes Edge Delivery Services support alternative Git repository hosts, like Bitbucket?\n\nA GitHub repository is mandatory. However, since Git is a distributed version control system (DVCS), you can host your codebase on other platforms (like Bitbucket) and push branches to GitHub for deployment.\n\nNote: The new BYOGIT feature is now available as early-access technology. It allows you to use a Bitbucket, GitLab, or (soon) Azure Repos repository directly with EDS. See https://www.aem.live/developer/byo-git\n\nid\ngithub\nWhat is fstab.yaml?\n\nfstab.yaml is a configuration file that defines the content source for your site. It specifies a mountpoint using sharing URLs from your authoring environment, determining where Edge Delivery Services pulls content from. Its format is similar to an fstab file in UNIX.\n\nEdge Delivery Services will always reference the fstab.yaml from your main branch.\n\nWith Helix 5, fstab.yaml is no longer required.
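For projects that still use it, a minimal fstab.yaml is just a single mountpoint; the sharing URL below is a placeholder for the link from your own authoring environment:

```yaml
# Minimal fstab.yaml sketch -- replace the placeholder with the sharing
# URL of your content folder (e.g. SharePoint or Google Drive).
mountpoints:
  /: https://drive.google.com/drive/folders/{folder-id}
```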
Content sources can now be configured via the Configuration Service API, allowing the same code repository to be reused across multiple sites in a repoless fashion.\n\nid\nfstab\nDoes Edge Delivery Services support multiple content mountpoints in fstab.yaml?\n\nNo, each project can have only one content source, whether defined in fstab.yaml or in the Configuration Service.\n\nid\nmulti-mountpoint\nHow does code move from .page to .live?\n\nWhen code is pushed to any branch in GitHub, the AEM Code Sync app automatically syncs it to the Codebus, making it available on both .page and .live.\n\nid\npage-to-live\nHow is continuous integration and deployment (CI/CD) configured?\n\nEdge Delivery Services works in a scaled trunk-based development model. Since no build process is required, developer velocity is much faster. Customers can add their internal processes via GitHub Actions and Workflows to trigger other processes and build checks needed during Pull Requests.\n\nHow does Edge Delivery Services handle caching for head.html updates? Do I need to clear the cache manually?\n\nWhen you update head.html, Edge Delivery Services automatically purges the cache for all HTML pages, ensuring that changes take effect immediately. Manual cache clearing is not required.\n\nid\nhead-cache\nDo I need to protect against DoS attacks with a WAF on our CDN?\n\nEdge Delivery Services origins are designed to withstand common internet attacks, including typical scripted and DoS attacks. The AEM security and operations team continuously implements countermeasures to mitigate risks.
Because your CDN may also connect to other origins beyond Edge Delivery Services, you may still want to use WAF services to protect those additional origins.\n\nLearn more on the Security page.\n\nid\nwaf\nShould redirects be managed on the CDN, or does Edge Delivery Services handle them?\n\nEdge Delivery Services supports redirect management through a “redirects” spreadsheet stored in the root folder of your project in your authoring environment. This allows for centralized control of URL redirects without requiring CDN-level configuration.\n\nid\nredirects\nWhy do content-based assets return a 301 redirect to a media_ image?\n\nWhen an asset is previewed, it is uploaded to the media bus, which stores it immutably. To maintain a human-readable filename, the asset can still be addressed by its original name. During delivery, the request is redirected (via 301) to the immutable media bus URL for retrieval.\n\nid\nmedia-301\nCan I configure redirects to return a 302 (temporarily moved) status code?\n\nEdge Delivery Services only supports 301 (permanently moved) redirects through the redirects spreadsheet file to optimize caching. Other kinds of redirects must be configured at the CDN level.\n\nFrom an SEO perspective, 301 redirects are generally considered safe as the original URL's SEO value is preserved. On the other hand, a 302 redirect that is not kept strictly temporary can be risky, as search engines might de-index the original page and fail to transfer the SEO value to the new URL.\n\nid\nredirect-302\nWhat is the character limit for a branch/subdomain?\n\nEdge Delivery Services subdomains follow the format branch--repo--owner. The combined length of the branch, repository, and owner name (including the two required separators [--]) cannot exceed 63 characters.
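A quick way to sanity-check a planned branch or repository name against this limit (the function and names here are illustrative):

```javascript
// Sketch: build the preview hostname and enforce the 63-character
// DNS label limit on branch--repo--owner.
function previewUrl(branch, repo, owner) {
  const label = `${branch}--${repo}--${owner}`;
  if (label.length > 63) {
    throw new Error(`"${label}" exceeds the 63-character DNS label limit`);
  }
  return `https://${label}.aem.page/`;
}

console.log(previewUrl('main', 'mysite', 'aemtutorial'));
// -> https://main--mysite--aemtutorial.aem.page/
```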
This is a DNS limitation and is not specific to Edge Delivery Services.\n\nFor more details, see the DNS size limits section in RFC 1035.\n\nid\ncharacter-limit\nWhat is the policy on using cookies or localStorage to store application state?\n\nEdge Delivery Services does not process cookies on the server side. We recommend using URL parameters to store application state.\n\nid\nstate\nContent Blocks\nDoes Edge Delivery Services have something like components?\n\nYes, Edge Delivery Services uses “blocks” as its components. A block is a reusable section of a page. By default, Edge Delivery Services displays document content as-is, but blocks allow for specific functionality and styling.\n\nid\nblocks\nHow do I create a block?\n\nTo create a block, place your content inside a table with a header row containing the block name. Edge Delivery Services will recognize this structure and apply the corresponding functionality or styling.\n\nLearn more on the Exploring Blocks: How They Work page.\n\nid\ncreate-a-block\nWhere can I find a list of available blocks?\n\nAvailable blocks vary by project. Many projects maintain a block inventory (or “kitchen sink”) page that showcases the available blocks and their structure.\n\nThe Edge Delivery Services boilerplate code includes a set of commonly used blocks. Beyond that, the Block Collection boilerplate includes additional blocks.\n\nLearn more about Block Collection\n\nid\nkitchen-sink\nDoes Edge Delivery Services support nested blocks?\n\nNo, Edge Delivery Services does not support nested blocks to keep authoring simple and manageable. However, nested structures can be achieved in other author-friendly ways.
For example, nested fragments can be used for more complex layouts.\n\nid\nnested-blocks\nHow can I implement a design system?\n\nTo implement a design system in Edge Delivery Services, leverage (or expand) the existing CSS variables for design tokens and ensure blocks are reusable across pages.\n\nid\ndesign-system\nHow can I reuse content across multiple pages in Edge Delivery Services, like AEM’s Content and Experience Fragments?\n\nEdge Delivery Services supports centrally managed content reuse through the fragment block, which embeds content from one page into another. Edge Delivery Services loads headers and footers as fragments by default.\n\nid\nfragments\nWhat is the equivalent of Core Components in Edge Delivery Services?\n\nThe Block Collection serves as the equivalent of Core Components in Edge Delivery Services. The collection provides a set of pre-built, extensible blocks designed for modern websites, providing an easily customizable foundation for any project.\n\nid\ncore-components\nHow do blocks compare to Web Components?\n\nBlocks are a core feature of Edge Delivery Services, integrating seamlessly with minimal setup. They are easy to author, optimized for performance, and are built using standard HTML, CSS, and JavaScript.\n\nWeb Components provide encapsulation and cross-platform reusability, making them valuable for design systems. However, they require careful lifecycle and performance management to avoid unnecessary overhead.\n\nid\nblocks-vs-web-components\nFrameworks and Web Technologies\nAre Web Components recommended for Edge Delivery Services projects?\n\nWeb Components can be used in Edge Delivery Services projects, but they are not the default recommendation. 
While they offer modularity and reusability, they require careful optimization to avoid impacting performance.\n\nLearn more on the Web Components page.\n\nid\nweb-components\nCan I integrate Web Components from a design system into Edge Delivery Services?\n\nYes, Edge Delivery Services supports integrating Web Components from design systems. Web Components should be initialized within blocks and only loaded when needed to avoid unnecessary overhead.\n\nid\nintegrating-web-components\nCan I use JavaScript frameworks?\n\nEdge Delivery Services does not require a specific frontend framework, but supports integration with frameworks like React, Angular, Vue, and Svelte. The recommended approach is to use technologies such as React Portals or Web Components within application-centric blocks while keeping simpler blocks in plain CSS and JavaScript for optimal web performance.\n\nWhen loading external components that are render-critical for the largest contentful paint (LCP), make sure to load them from the same host as the main website, so you avoid the performance penalty of a second DNS lookup and TLS handshake. This can be accomplished by mapping the path in the CDN or pushing the required components into the site's git repository.\n\nid\nframeworks\nCan I use CSS frameworks?\n\nYes, CSS frameworks like Less, PostCSS, and TailwindCSS can be used in Edge Delivery Services. CSS frameworks should be used thoughtfully, balancing page speed with layout shifts while keeping styling readable and maintainable. Keep in mind that most frameworks add a build step, which slows down development velocity and gives up the instant feedback loop that is a big advantage of working without a framework.\n\nid\ncss-frameworks\nDoes Edge Delivery Services support server-side rendering (SSR) for Lit-based Web Components?\n\nNo, Edge Delivery Services does not support any server-side customizations.
Instead, it generates optimized, semantic markup that can be decorated on the client side for styling, functionality, and personalization. This ensures flexibility while maintaining optimized performance.\n\nid\nserver-side-rendering\nIntegrations and Third-Party Tools\nWhat are the performance considerations when integrating with third-party tools?\n\nIntegrating third-party tools can impact performance, especially if they block rendering or delay LCP (Largest Contentful Paint). Avoid adding scripts in head.html to keep the critical rendering path clear. Instead, load scripts only when needed using loadScript() within specific blocks or IntersectionObserver to defer execution. Defer non-essential integrations and tools, such as analytics or tag managers, until after page load to reduce Total Blocking Time (TBT) and improve user experience.\n\nWant to maintain high performance? Check out the Keeping It 100 guide.\n\nid\nthird-party-performance\nCan I integrate with AEM Sites?\n\nYes, Edge Delivery Services integrates seamlessly with AEM Sites. In fact, Edge Delivery Services is a part of AEM Sites!\n\nid\nsites\nCan I use Adobe Target?\n\nYes, Edge Delivery Services is compatible with Adobe Target.
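Circling back to the third-party performance guidance above, the deferred-loading pattern can be sketched like this. loadScript mirrors the boilerplate helper of the same name, the widget URL is a placeholder, and the code is intended to run in the browser inside a block's decorate function:

```javascript
// Sketch: defer a third-party script until its block scrolls into view,
// keeping it off the critical rendering path (illustrative only).
function loadScript(src, attrs = {}) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    Object.entries(attrs).forEach(([key, value]) => script.setAttribute(key, value));
    script.onload = resolve;
    script.onerror = reject;
    document.head.append(script);
  });
}

function decorate(block) {
  // Load the widget only once the block is actually visible.
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      observer.disconnect();
      loadScript('https://example.com/widget.js', { async: '' }); // placeholder URL
    }
  });
  observer.observe(block);
}
```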
Learn more at the Configuring Adobe Target Integration page.\n\nAdditionally, Edge Delivery Services features a built-in experimentation framework that enables quick test creation, execution without performance impact, and fast deployment of test winners.\n\nid\nadobe-target\nCan I use Adobe Analytics?\n\nYes, Adobe Analytics can be integrated with Edge Delivery Services just like on any other website.\n\nid\nadobe-analytics\nCan I use Adobe Launch?\n\nYes, Adobe Launch can be integrated with Edge Delivery Services just like on any other website.\n\nid\nadobe-launch\nCan I use AEM Forms?\n\nYes, with a valid AEM Forms license, Edge Delivery Services supports AEM Forms through the Adaptive Forms Block, allowing you to create, style, and manage forms within your site.\n\nLearn more on the Forms page.\n\nid\nforms\nCan I use Marketo forms?\n\nYes, Marketo forms can be embedded in Edge Delivery Services pages using the Marketo Forms API or iframe embed codes.\n\nid\nmarketo\nIs there a default forms capability that can send email on submission?\n\nNo, Edge Delivery Services does not include a default forms capability that sends emails on submission. However, you can integrate services like AEM Forms, Adobe Campaign, Workfront, or external email APIs to handle form submissions and email notifications.\n\nid\nforms-email\nCan I use AEM Assets?\n\nYes, AEM Assets can be used in Edge Delivery Services by configuring the AEM Assets Sidekick plugin. 
This allows authors to access and use AEM Assets within SharePoint or Google Docs.\n\nid\naem-assets\nDoes Edge Delivery Services support Cloudflare?\n\nYes, you can run a Cloudflare CDN in front of Edge Delivery Services to add a custom domain or integrate Edge Delivery Services into an existing site.\n\nLearn more on the Cloudflare Setup page.\n\nid\ncloudflare-cdn\nDoes AEM support Fastly?\n\nYes, you can run a Fastly CDN in front of Edge Delivery Services to add a custom domain or integrate Edge Delivery Services into an existing site.\n\nLearn more on the Fastly Setup page.\n\nid\nfastly-cdn\nDoes AEM support Cloudfront?\n\nYes, you can run a Cloudfront CDN in front of Edge Delivery Services to add a custom domain or integrate Edge Delivery Services into an existing site.\n\nLearn more on the Amazon Web Services (AWS) CloudFront Setup page.\n\nid\ncloudfront-cdn\nDoes Edge Delivery Services support Akamai?\n\nYes, you can run an Akamai CDN in front of Edge Delivery Services to add a custom domain or integrate Edge Delivery Services into an existing site.\n\nLearn more on the Akamai Setup page.\n\nid\nakmai-cdn\nCan I use Google Tag Manager?\n\nYes, Google Tag Manager can be integrated with Edge Delivery Services just like on any other website.\n\nid\ngoogle-tag-manager\nCan I integrate with OneTrust for consent management?\n\nYes, OneTrust can be integrated with Edge Delivery Services just like on any other website.\n\nid\nonetrust\nCan I integrate translation tools?\n\nYes, third-party translation tools can be integrated into Edge Delivery Services. While Edge Delivery Services natively supports translation features within content sources, additional translation services can be incorporated at the content source or CDN tier for more advanced localization needs.\n\nid\ntranslation-integration\nCan I integrate a third-party search solution?\n\nYes, integrating a third-party site search is common for large websites. 
The built-in sitemap feature is typically used to facilitate this integration. Edge Delivery Services includes out-of-the-box indexing, which provides a fast and simple search option for most websites.\n\nid\nsite-search\nCan I import content from other platforms to Edge Delivery Services?\n\nYes, Edge Delivery Services supports content migration from various platforms, including WordPress, Unsplash, Contentful, other AEM instances, and more. The AEM team will help you with the import process, ensuring your content from any CMS is converted.\n\nid\nimporter\nHow do I configure Edge Delivery Services with Dynamic Media and Smart Cropping?\n\nSmart cropping can be configured within AEM Assets.\n\nLegal and Compliance\nWhat is the service level agreement (SLA) for Edge Delivery Services?\n\nThe SLA for Edge Delivery Services is the same as the SLA for AEM as a Cloud Service. You can check Adobe Status for availability of all Adobe services.\n\nid\nsla\nHow can I report the misuse of an Adobe product or service?\n\nContact abuse@adobe.com or notify Adobe Security.\n\nid\nmisuse","lastModified":"1761575047","labs":""},{"path":"/docs/indexing-reference","title":"Indexing reference","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"In your helix-query.yaml, you can define one or more index definitions. A sample index definition looks as follows: https://gist.github.com/dominique-pfister/92cb67b6f95e1edee6a7d6508b124039","content":"style\ncontent\n\nIndexing reference\n\nIn your helix-query.yaml, you can define one or more index definitions. A sample index definition looks as follows: https://gist.github.com/dominique-pfister/92cb67b6f95e1edee6a7d6508b124039\n\nThe include and exclude sections dictate which documents get indexed. Everything that is included but not excluded gets indexed. The double asterisk ** matches everything under a prefix, including the prefix, so in the example above, the path /en gets indexed as well.
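An include/exclude section along the lines described might look like this (the index name and paths are placeholders):

```yaml
# Sketch: one index covering /en, skipping drafts.
# The double asterisk matches the prefix itself plus everything below it.
indices:
  english:
    include:
      - '/en/**'
    exclude:
      - '/en/drafts/**'
```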
If you leave out that section entirely, everything gets indexed.\n\nThe first index called english defines itself as default (using the ampersand). The second index called french uses that default definition and overrides some attributes.\n\nThe select property is a CSS selector that grabs all matching HTML elements out of your document. The indexer will apply your selectors on the HTML markup, not on the rendered DOM, so you must write them accordingly. (Right-click -> View Page Source on the page you want to extract information from to see the exact HTML the indexer will work on.)\n\nIf you just want the first matching element to be returned, use selectFirst instead of select:\n\nfirst-img:\n  selectFirst: img\n  value: attribute(el, \"src\")\n\n\nTo verify that a CSS selector selects what you expect, you can use the AEM CLI (aem up --print-index), navigate to the page where the selector should extract a meaningful value and check the console. The CLI will use the helix-query.yaml file from your local filesystem and will print the extracted values, or an empty string if it cannot find the information it is looking for.\n\naem up --print-index\n...\ninfo: Index information for /my/test/page\ninfo: Index: mysite\ninfo:            author: \"John Smith\"\n\n\nNote that not all CSS selectors are supported. Internally, we use a library called hast-util-select, and the list of supported selectors can be found here: https://github.com/syntax-tree/hast-util-select#support\n\nThe value or values property contains an expression to apply to all HTML elements selected. The property name value is preferred when you need a string; values, on the other hand, provides you with an array of all the matches found.
The expression can contain a combination of functions and variables:\n\ninnerHTML(el)\n\nReturns the HTML content of an element.\n\n# Preserve links and formatting in a rich text snippet\nbio:\n select: main > div:first-child p:first-of-type\n value: innerHTML(el)\n\ntextContent(el)\n\nReturns the text content of the selected element, and all its descendants.\n\n# Extract the text of the first h1 on the page\nheadline:\n select: main h1\n value: textContent(el)\n\nattribute(el, name)\n\nReturns the value of the attribute with the specified name of an element.\n\ntitle:\n select: head > meta[property=\"og:title\"]\n value: attribute(el, \"content\")\n\nmatch(el, re)\n\nMatches a regular expression containing parentheses to capture items in the passed element. In the author example above, the actual contents of the <p> element selected might contain by John Smith, so it would capture everything following by .\n\n# Extract year from a date string like \"02/15/2025\"\nyear:\n select: head > meta[name=\"publication-date\"]\n value: match(attribute(el, \"content\"), \"\\\\d{2}\\\\/\\\\d{2}\\\\/(\\\\d{4})\")
end is optional, and defaults to the length of the text.\n\n# First 200 characters of a blog post for card previews\nsnippet:\n select: main > div p:first-of-type\n value: characters(textContent(el), 0, 200)\n\nwords(el, start, end)\n\nUseful for teasers, this selects a range of words out of an HTML element.\n\n# First 50 words of page content as a teaser\ndescription:\n select: main > div p\n value: words(textContent(el), 0, 50)\n\nreplace(el, substr, newSubstr)\n\nReplaces the first occurrence of a substring in a text with a replacement.\n\n# Remove a prefix from a title\ntitle:\n select: head > meta[property=\"og:title\"]\n value: replace(attribute(el, \"content\"), \"ACME Corp | \", \"\")\n\nreplaceAll(el, substr, newSubstr)\n\nReplaces all occurrences of a substring in a text with a replacement.\n\n# Replace all underscores with spaces\ncategory:\n select: head > meta[name=\"category\"]\n value: replaceAll(attribute(el, \"content\"), \"_\", \" \")\n\nparseTimestamp(el, format)\n\nParses a timestamp given as string in a custom format, and returns its value as number of seconds since 1 Jan 1970.\n\n# Parse an authored date in MM/DD/YYYY format\npublicationDate:\n select: head > meta[name=\"publication-date\"]\n value: parseTimestamp(attribute(el, \"content\"), \"MM/DD/YYYY\")\n\ndateValue(el, format)\n\nParses a timestamp given as string, and returns its value as serial number, where January 1, 1900 is serial number 1. 
For more information see DATEVALUE function\n\n# Returns an Excel serial date number (useful for spreadsheet sorting/filtering)\nlastUpdated:\n select: head > meta[name=\"last-updated\"]\n value: dateValue(attribute(el, \"content\"), \"YYYY-MM-DD\")\n\nel\n\nReturns the HTML elements selected by the select property.\n\n# Use el directly when select already targets the value you need\nheading:\n select: main h1\n value: textContent(el)\n\npath\n\nReturns the path of the HTML document being indexed.\n\nheaders[name]\n\nReturns the value of the HTTP response header with the specified name, at the time the HTML document was fetched.\n\nlastModified:\n select: none\n value: parseTimestamp(headers[\"last-modified\"], \"ddd, DD MMM YYYY hh:mm:ss GMT\")\n\nThe full definition of the helix-query.yaml is available here: https://github.com/adobe/helix-shared/blob/main/docs/indexconfig.md","lastModified":"1771955857","labs":""},{"path":"/docs/setup-byo-cdn-push-invalidation-for-akamai","title":"Setup push invalidation for Akamai","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Push invalidation automatically purges content on the customer's production CDN (e.g. www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to ...","content":"style\ncontent\nSetup push invalidation for Akamai\n\nPush invalidation automatically purges content on the customer's production CDN (e.g. 
www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to the main branch (changes on other branches do not trigger push invalidation).\n\nContent is purged by url and by cache tag/key.\n\nPush invalidation is enabled by setting up the Akamai Fast Purge credentials in the cdn section of the configuration service.\n\ncurl --request POST \\\n  --url https://admin.hlx.page/config/{org}/sites/{site}/cdn.json \\\n  --header 'content-type: application/json' \\\n  --data '{\n\t\"prod\": {\n\t\t\"host\": \"{production host}\",\n\t\t\"type\": \"akamai\",\n\t\t\"endpoint\": \"{akamai host}\",\n\t\t\"clientSecret\": \"{client_secret}\",\n\t\t\"clientToken\": \"{client_token}\",\n\t\t\"accessToken\": \"{access_token}\"\n\t}\n}'\n\n\nConfiguration properties:\n\nkey\t value\t comment \n host\t <Production Host>\t Host name of production site, e.g. www.yourdomain.com \n type\t akamai\t \n endpoint\t <host>\t Fast Purge API host, e.g. ***.purge.akamaiapis.net \n clientSecret\t <client_secret>\t Fast Purge API credentials \n clientToken\t <client_token>\t Fast Purge API credentials \n accessToken\t <access_token>\t Fast Purge API credentials\n\nAEM push invalidation uses the Akamai Fast Purge API, specifically Delete by URL and Delete by cache tag.\n\nThe Fast Purge API credentials consist of:\n\nhost = akaa-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.luna.akamaiapis.net\nclient_token = akab-XXXXXXXXXXXXXXXX-XXXXXXXXXXXXXXXX\nclient_secret = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\naccess_token = akab-XXXXXXXXXXXXXXXX-XXXXXXXXXXXXXXXX\n\n\nThey can be generated by following the instructions at Create an API client with custom permissions.\n\nGo to Identity & Access Management:\n\nCreate API client:\n\n\n\n\n\nRequired group/role permissions:\n\nYou can validate the credentials with this tool.\n\nSpecial Mention - Akamai Edge-Control Headers\n\nAEM uses a fine-tuned, production-hardened way to supply caching information that applies
to the specific CDN, in conjunction with our reliable push invalidation. This allows us to improve cache efficiency and consistency over traditional TTL-based approaches.\n\n\nEvery CDN vendor supports a way to directly influence caching behavior, and we are excited to see standardization efforts like \"Targeted Cache Control\" (TCC) on the roadmap for Akamai (see: https://www.akamai.com/blog/news/targeted-cache-control). In the meantime, we are using Akamai's long-term supported Edge-Control header.","lastModified":"1763830947","labs":""},{"path":"/docs/setup-byo-cdn-push-invalidation-for-cloudflare","title":"Setup push invalidation for Cloudflare","image":"/docs/media_17531c5817dba9e27ed6963d25d92986d72d70014.jpg?width=1200&format=pjpg&optimize=medium","description":"Push invalidation automatically purges content on the customer's production CDN (e.g. www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to ...","content":"style\ncontent\nSetup push invalidation for Cloudflare\n\nPush invalidation automatically purges content on the customer's production CDN (e.g. www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to the main branch (changes on other branches do not trigger push invalidation).\n\nContent is purged by url and by cache tag/key.\n\nPush invalidation is enabled by registering a Cloudflare purge token in the cdn section of the configuration service.\n\ncurl --request POST \\\n  --url https://admin.hlx.page/config/{org}/sites/{site}/cdn.json \\\n  --header 'content-type: application/json' \\\n  --data '{\n\t\"prod\": {\n\t\t\"host\": \"{production host}\",\n\t\t\"type\": \"cloudflare\",\n\t\t\"plan\": \"enterprise\",\n\t\t\"zoneId\": \"{cloudflare_zone_id}\",\n\t\t\"apiToken\": \"{cloudflare_api_token}\"\n\t}\n}'\n\n\nConfiguration properties:\n\nkey\t value\t comment \n host\t <Production Host>\t Host name of production site, e.g.
www.yourdomain.com \n type\t cloudflare\t \n plan\t e.g. free\t values: free, pro, business, enterprise; default: free \n zoneId\t <Cloudflare Zone ID>\t ID of production zone \n apiToken\t <Cloudflare API Token>\t\n\nTo create an API Token,\n\ngo to API Tokens\nclick on \"Create Token\",\ngo to \"Create Custom Token\" at the bottom and click on \"Get started\"\nenter a token name (e.g. \"Production Site Purge Token\"),\nPermissions: \"Zone\", \"Cache Purge\", \"Purge\"\nZone Resources: \"Include\", \"Specific zone\", \"<your production zone>\"\nclick on \"Continue to summary\"\nclick on \"Create Token\",\ncopy the generated token value.\n\nNote that only sites on the enterprise plan will be surgically purged by url and cache key. A Purge All will be performed instead on non-enterprise sites every time an author publishes a content change.\n\nYou can validate the credentials with this tool.","lastModified":"1763830956","labs":""},{"path":"/docs/setup-byo-cdn-push-invalidation-for-cloudfront","title":"Set up push invalidation for AWS Cloudfront","image":"/docs/media_14f7e30bdfd5f95c1e5fca4e6ca48ccd78ff5d3c1.png?width=1200&format=pjpg&optimize=medium","description":"Push invalidation automatically purges content on the customer's production CDN (e.g. www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to ...","content":"style\ncontent\nSet up push invalidation for AWS Cloudfront\n\nPush invalidation automatically purges content on the customer's production CDN (e.g.
www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to the main branch (changes on other branches do not trigger push invalidation).\n\nPush invalidation is enabled by setting up the CloudFront purge credentials in the cdn section of the configuration service.\n\ncurl --request POST \\\n  --url https://admin.hlx.page/config/{org}/sites/{site}/cdn.json \\\n  --header 'content-type: application/json' \\\n  --data '{\n\t\"prod\": {\n\t\t\"host\": \"{production host}\",\n\t\t\"type\": \"cloudfront\",\n\t\t\"distributionId\": \"{distributionId}\",\n\t\t\"accessKeyId\": \"{accessKeyId}\",\n\t\t\"secretAccessKey\": \"{secretAccessKey}\"\n\t}\n}'\n\n\nNB: CloudFront does NOT support purging by cache tag/key. Purge by cache tag/key always triggers a purge all.\n\nConfiguration properties:\n\nkey\t value\t comment \n host\t <Production Host>\t Host name of production site, e.g. www.yourdomain.com \n type\t cloudfront\t \n distributionId\t <Cloudfront Distribution ID>\t \n accessKeyId\t <AWS Access key ID>\t AWS credentials \n secretAccessKey\t <AWS Secret access key>\t AWS credentials\nTo create the AWS credentials:\n\n\nIn the AWS Console, open the IAM dashboard, then select Policies → Create policy:\n\nIn the following screen, select \"CloudFront\" as a service, and \"CreateInvalidation\" as action, then click \"Add ARNs\" to restrict the permissions to a single distribution.\n\nEnter your Distribution ID and click on “Add ARNs”:\n\n\n\nProceed to “Next: Tags” and then “Next: Review”.\n\nEnter a name for the new policy, e.g. “AEM<YourSite>Invalidate”, and click on “Create policy”:\n\n\n\nIn the IAM dashboard, select Users → Create user\n\n\nEnter a user name (e.g.
“Invalidator”) and click on Next:\n\nOn the “Set permissions” pane, select “Attach policies directly” and select the newly created policy (“AEM<YourSite>Invalidate” in our example):\n\nProceed to the next step, click on “Create user” and then “View user”:\n\nSelect the “Security credentials” tab and click on “Create access key”:\n\n\n\nSelect “Third-party service”, click the checkbox and proceed to “Next”:\n\nFinally, copy the Access key ID and Secret access key values:\n\nYou can validate the credentials with this tool.","lastModified":"1763830928","labs":""},{"path":"/docs/setup-byo-cdn-push-invalidation-for-fastly","title":"Setup push invalidation for Fastly","image":"/docs/media_17531c5817dba9e27ed6963d25d92986d72d70014.jpg?width=1200&format=pjpg&optimize=medium","description":"Push invalidation automatically purges content on the customer's production CDN (e.g. www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to ...","content":"style\ncontent\nSetup push invalidation for Fastly\n\nPush invalidation automatically purges content on the customer's production CDN (e.g. www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to the main branch (changes on other branches do not trigger push invalidation).\n\nContent is purged by url and by cache tag/key.\n\nPush invalidation is enabled by setting up the Fastly purge credentials in the cdn section of the configuration service.\n\ncurl --request POST \\\n  --url https://admin.hlx.page/config/{org}/sites/{site}/cdn.json \\\n  --header 'content-type: application/json' \\\n  --data '{\n\t\"prod\": {\n\t\t\"host\": \"{production host}\",\n\t\t\"type\": \"fastly\",\n\t\t\"serviceId\": \"{serviceId}\",\n\t\t\"authToken\": \"{authToken}\"\n\t}\n}'\n\n\nConfiguration properties:\n\nkey\t value\t comment \n host\t <Production Host>\t Host name of production site, e.g.
www.yourdomain.com \n type\t fastly\t \n serviceId\t <Fastly Service ID>\t Service ID of production service \n authToken\t <Fastly API Token>\t\n\nTo create a Fastly API Token,\n\ngo to Personal API Tokens,\nclick on \"Create Token\",\nenter a name (e.g. \"Production Site Purge Token\"),\nselect \"A specific service\" and your production service from the drop-down list,\ncheck the \"Purge select content (purge_select) — Purge by URL or surrogate key\" check box,\nselect \"Never expire\",\nclick on \"Create Token\",\ncopy the generated token value shown in the pop-up window.\n\nYou can validate the credentials with this tool.","lastModified":"1763830964","labs":""},{"path":"/docs/setup-byo-cdn-push-invalidation","title":"Configuring push invalidation for BYO production CDN","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Push invalidation automatically purges content on the customer's production CDN (e.g. www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to ...","content":"style\ncontent\n\nConfiguring push invalidation for BYO production CDN\n\nPush invalidation automatically purges content on the customer's production CDN (e.g. 
www.yourdomain.com), whenever an author publishes content changes or a developer pushes code changes to the main branch (changes on other branches do not trigger push invalidation).\n\nContent is purged by url and by cache tag/key.\n\nSetting up push invalidation requires 2 steps:\n\nConfiguration\nOpt-In Request Header\nConfiguration\n\nPush invalidation is currently supported for CDNs of the following vendors:\n\nFastly\nAkamai\nCloudflare\nCloudFront\nAdobe Managed\n\nPush invalidation is enabled by adding specific properties to the project's configuration (an Excel workbook named .helix/config.xlsx in Sharepoint or a Google Sheet named .helix/config in Google Drive).\n\nThe following sections describe the vendor specific properties required to set up push invalidation.\n\nhttps://main--helix-website--adobe.aem.page/docs/setup-byo-cdn-push-invalidation-for-fastly\nhttps://main--helix-website--adobe.aem.page/docs/setup-byo-cdn-push-invalidation-for-akamai\nhttps://main--helix-website--adobe.aem.page/docs/setup-byo-cdn-push-invalidation-for-cloudflare\nhttps://main--helix-website--adobe.aem.page/docs/setup-byo-cdn-push-invalidation-for-cloudfront\nhttps://main--helix-website--adobe.aem.page/docs/setup-byo-cdn-push-invalidation-for-managed\nOpt-In Request Header\n\nThe production CDN needs to send the following opt-in header to the origin in order to enable long cache TTLs:\n\nX-Push-Invalidation: enabled\n\nPrevious\n\nPlaceholders\n\nUp Next\n\nSitemap","lastModified":"1765959434","labs":""},{"path":"/developer/block-collection","title":"Block Collection","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"This is a collection of blocks considered a part of the AEM product and are recommended as blueprints for blocks in your project.","content":"style\ncontent\n\nBlock Collection\n\nThis is a collection of blocks considered a part of the AEM product and are recommended as blueprints for blocks in your project.\n\nThese blocks come 
from real production AEM projects. To be a part of this collection, a block needs to be in high use across a number of projects, provide sufficiently abstract functionality, and be general enough that it can be reused without having to change the underlying content model.\n\nAs the needs and designs of websites change, the block collection will change as well. Additions will be made to reflect emerging needs of projects, but blocks that are not used frequently enough will also be removed (deprecated).\n\nThere are a few technical principles for the blocks in the collection:\n\nIntuitive: Content structure that’s intuitive and easy to author\nUsable: No dependencies, compatible with boilerplate\nResponsive: Works across all breakpoints\nContext Aware: Inherits CSS context such as text and background colors\nLocalizable: No hard-coded content\nFast: No negative performance impact\nSEO and A11y: SEO friendly and accessible\n\nAll of the blocks can be considered a basis for your own block development. It is very likely that you will change all the .css and .js code to meet your own project needs. The primary value of these blocks is the content structure they provide.\n\nConsidering that the code of your block will be fully adapted to your project, there is no intent for the blocks in the collection to be backward compatible with their respective older versions or to make them upgradable.\n\nBoilerplate\n\nThe most commonly used blocks (as well as default content types) are curated in the AEM Boilerplate and are a part of every AEM project.
For a block to become a part of boilerplate it has to be used by the vast majority of all AEM projects.\n\nThe code base for all the blocks in AEM Boilerplate is open-source and can be found on GitHub adobe/aem-boilerplate\n\nBlocks in AEM Boilerplate can be discovered using the sidekick library below, use the copy button to copy the corresponding content structure into your clipboard and paste into a document to see the content structure.\n\nHeadings\n\nDefault Content\n\nDifferent levels of headings provide the semantic backbone of your document\n\nText\n\nDefault Content\n\nBody text or copy with rich semantic formatting options\n\nImages\n\nDefault Content\n\nPictures bring your content alive\n\nLists\n\nDefault Content\n\nOrdered and unordered lists wherever they are needed\n\nLinks\n\nDefault Content\n\nReference other websites or your own content\n\nButtons\n\nDefault Content\n\nCall-to-action buttons and more\n\nCode\n\nDefault Content\n\nHighlight preformatted code snippets in your content\n\nSections\n\nDefault Content\n\nGroup content on your page into sections\n\nIcons\n\nDefault Content\n\nMake your content more interesting with icons\n\nHero\n\nBlock\n\nHero treatment at the top of a page\n\nColumns\n\nBlock\n\nFlexible way to handle multi-column layouts in a responsive way\n\nCards\n\nBlock\n\nList of cards with or without images and links\n\nHeader\n\nBlock\n\nFlexible header and navigation example\n\nFooter\n\nBlock\n\nSimple extensible footer block\n\nMetadata\n\nAdd metadata to your page where needed\n\nSection Metadata\n\nHighlight or structure all the content in a section\n\nBlock Collection\n\nThe block collection contains blocks that are commonly-used, but are not so common to be considered boilerplate. As a rule-of-thumb, to be included in the block collection a block must be used on more than half of all AEM projects.\n\nThe block collection can be the entry path into boilerplate code. 
Likewise if a block in the boilerplate is no longer used as much, it can be moved to this collection.\n\nThe code base for all the blocks in AEM Block Collection is open-source and can be found on GitHub adobe/aem-block-collection\n\nBlocks in AEM Block Collection can be discovered using the sidekick library below, use the copy button to copy the corresponding content structure into your clipboard and paste into a document to see the content structure.\n\nEmbed\n\nBlock\n\nA simple way to embed social media content into AEM pages\n\nFragment\n\nBlock\n\nShare pieces of content across multiple pages\n\nTable\n\nBlock\n\nA way to organize tabular data into rows and columns\n\nVideo\n\nBlock\n\nDisplay and playback videos directly from AEM\n\nAccordion\n\nBlock\n\nA stack of descriptive labels that can be toggled to display related full content\n\nBreadcrumbs\n\nBlock Add-on\n\nA list of page titles and relevant links showing the location of the current page in the navigational hierarchy\n\nCarousel\n\nBlock\n\nA dynamic display tool that smoothly transitions through a series of images with optional text content\n\nModal\n\nAutoblock\n\nA popup that appears over other site content\n\nQuote\n\nBlock\n\nA display of a quotation or a highlight of specific passage (or “pull quotes”) within a document\n\nSearch\n\nBlock\n\nAllows users to find site content by entering a search term\n\nTabs\n\nBlock\n\nSegment information into multiple labeled (or “tabbed”) panels\n\nForm\n\nBlock (Deprecated)\n\nA set of input controls grouped together that enables users to submit information\n\nThe block collection is continually evolving based on the feedback from the AEM community. If you think that there is a block that should be included in the block collection please speak to your AEM contact. 
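Whatever blocks a project adds, the block and section names authored in documents are normalized into CSS class names on the page. A hedged sketch of that normalization, modeled on the toClassName helper found in the boilerplate's aem.js (the real implementation may differ in detail):

```javascript
// Normalize an authored block name into a CSS class name (illustrative sketch,
// modeled on the boilerplate's toClassName helper).
function toClassName(name) {
  return typeof name === 'string'
    ? name
      .toLowerCase()
      .replace(/[^0-9a-z]+/g, '-') // any run of non-alphanumerics becomes one hyphen
      .replace(/^-+|-+$/g, '') // trim leading/trailing hyphens
    : '';
}
```

For example, a block named "Section Metadata" yields the class name section-metadata.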
Current candidates for inclusion in the block collection include:\n\nConsent Banner\n\nIf you have immediate need of a block that is not yet part of the collection, it is relatively easy to find AEM projects on GitHub that have example implementations for all of the above candidates.\n\nBlock Party\n\nThe Block Party is a place for the AEM developer community to showcase what they have built on AEM sites. It also allows others to avoid reinventing the wheel and reuse these blocks / code snippets / integrations built by the community and tweak the code as necessary to fit their own projects. See Block Party for everything it has to offer.\n\nNote: While we love and support our AEM developer community, Adobe is not responsible for maintaining or updating the code that is showcased in Block Party. Please use the code at your own discretion.\n\nPrevious\n\nAnatomy of a Project\n\nUp Next\n\nBlock Party","lastModified":"1725864574","labs":""},{"path":"/developer/block-collection/buttons","title":"Buttons","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Websites very often contain buttons, as call to actions or more generically. By default in the Boilerplate project buttons are created as a link in ...","content":"style\ncontent\n\nButtons\nNotes:\n\nWebsites very often contain buttons, as call to actions or more generically. By default in the Boilerplate project buttons are created as a link in a paragraph by itself.\n\nBold and Italic (<strong> and <em>) and possibly combinations thereof are used to specify certain types of buttons. 
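A hedged sketch of how that formatting commonly maps to button classes in boilerplate-based projects (the primary/secondary names are the usual convention, not a guarantee; the actual decoration code works on the DOM rather than on tag names):

```javascript
// Map the formatting wrapper around a lone link to a button class list
// (illustrative sketch of the common boilerplate convention).
function buttonClasses(wrapperTag) {
  if (wrapperTag === 'STRONG') return 'button primary'; // bold link
  if (wrapperTag === 'EM') return 'button secondary'; // italic link
  return 'button'; // plain link in a paragraph by itself
}
```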
There are often primary and secondary buttons and the default styling is usually defined for default content and/or specified by a containing block, and Bold and Italic can be used to specify alternative variations of buttons.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nAs Buttons are considered Default Content they are styled in project or block CSS code.\n\nThere is Javascript code for decoration purposes that is included in the default boilerplate behavior and it usually remains unchanged.\n\nDecoration Code\n\nThe CSS Styling is very project specific and gets adjusted as needed for a project or block by block.\n\nStyling Code\n\nAll the code above is part of the Boilerplate project and does not need to be copied.\n\nPrevious\n\nLinks\n\nUp Next\n\nCode","lastModified":"1772010561","labs":""},{"path":"/developer/block-collection/code","title":"Code","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Most technical documentation websites (including this one) have the need to display code. Most marketing websites don’t have that requirement, but since there is an ...","content":"style\ncontent\n\nCode\nNotes:\n\nMost technical documentation websites (including this one) have the need to display code. Most marketing websites don’t have that requirement, but since there is an intuitive and simple styling and markup the notion of code elements both inline and preformatted multiline is supported out of the box.\n\nFormatting something in word or gdoc as a fixed font (eg. 
Courier New) will automatically output a <code> element or a <code> and <pre> block for multiline.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nAs the code functionality is part of Default Content it is usually really just a matter of styling things according to the project specific CSS rules.\nThis code is part of boilerplate so there is no need to copy it.\n\nSee Boilerplate Styling\n\nPrevious\n\nButtons\n\nUp Next\n\nSections","lastModified":"1772010697","labs":""},{"path":"/developer/block-collection/footer","title":"Footer (Block)","image":"/developer/block-collection/media_17531c5817dba9e27ed6963d25d92986d72d70014.jpg?width=1200&format=pjpg&optimize=medium","description":"The footer block is loaded by default in the boilerplate project into the <footer> element. Out-of-the-box it provides a simple example for a footer but ...","content":"style\ncontent\n\nFooter (Block)\nNotes:\n\nThe footer block is loaded by default in the boilerplate project into the <footer> element.\nOut-of-the-box it provides a simple example for a footer but is likely to be extended or adjusted on the per project basis.\n\nThe footer block is usually not referenced by authors but is loaded automatically on every page.\n\nThe content for the footer is loaded as a fragment and is by default authored in a footer (or footer.docx) document.\nAs footer structure and designs change rarely and are usually visually very different from the rest of the blocks on a site, it is often a good strategy to divide the content into sections and decorate specific classes onto the sections based on their sequence and apply CSS styling to those classes.\n\nThe footer document has its own lifecycle and when previewed or published applies to all pages that use a given navigation.\n\nIt is not uncommon to have multiple footer documents for a site eg. 
one per locale / language.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nThis code is included in Boilerplate, there is no need to copy it.\n\nBoilerplate Block Code\n\nPrevious\n\nHeader\n\nUp Next\n\nMetadata","lastModified":"1772010851","labs":""},{"path":"/developer/block-collection/headings","title":"Headings","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Semantic headings are the backbone of any document structure. In documents you should always follow the semantic hierarchy of your document, meaning that a Heading ...","content":"style\ncontent\n\nHeadings\nNotes:\n\nSemantic headings are the backbone of any document structure. In documents you should always follow the semantic hierarchy of your document, meaning that a Heading 1 should contain a Heading 2 which in turn should contain a Heading 3 and so forth.\nIn cases where you find yourself using the headings out of sequence or leaving gaps in the heading hierarchy, that’s usually an indication that you are either trying to use headings to adjust to visual or design constraints or you are using headings for something that is semantically not a heading. Either of those can lead to bad results.\n\nAccording to Web best practices there should only be a single Heading 1 per page, which will also be used as the default title for the document.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nThe Content Structure leverages the built-in Heading 1 - Heading 6 mapped to h1 through h6.\n\nSee Content in Document\n\nCode:\n\nAs headings are default content they are styled in project or block CSS code. 
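The hierarchy rule from the notes above (no gaps between heading levels) can be illustrated with a small check. This is purely illustrative and not part of boilerplate:

```javascript
// Given a page's heading levels in document order (e.g. [1, 2, 3, 2]),
// report whether any heading skips a level, such as an h3 directly after an h1.
function hasHeadingGap(levels) {
  return levels.some((level, i) => i > 0 && level - levels[i - 1] > 1);
}
```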
There is usually no JavaScript code used.\n\n\nThere is no code related to list generic styling in Boilerplate.\n\nPrevious\n\nBlock Collection\n\nUp Next\n\nText","lastModified":"1771931825","labs":""},{"path":"/developer/block-collection/icons","title":"Icons","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Most if not all websites have icons, therefore there is a simple way to reference icons for authors.","content":"style\ncontent\n\nIcons\nNotes:\n\nMost if not all websites have icons, therefore there is a simple way to reference icons for authors.\n\nIcons are referenced as :<iconname>: notation. As there are different ways to implement icons in the browser either as plain css classes, icon fonts or SVG, we offer a non-intrusive way to support all of those.\n\nThe boilerplate project includes an automatic mechanism to insert SVGs into the icon `<span>`s as that’s the most common way to deal with icons.\n\nWhile some icons need to be in the code (icons referenced in blocks for example), there are times when authors need to add and reference new icons and update them on an ongoing basis. These icons can and should live with the content under an /icons/ folder in the content source (eg. Sharepoint or Google Drive). These icons can also be referenced the exact same way using the :<iconname>: notation. This will allow marketers to add and update icons they need for content without any dependency on a code change.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nThe :<iconname>: can be inserted as a part of all Default Content constructs.\n\nSee Content in Document\n\nCode:\n\nIcons are Default Content and are styled in project specific CSS code. 
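The :<iconname>: decoration can be sketched as a plain string transform. This version is illustrative only (the regex and markup are assumptions; the actual boilerplate inserts the spans during DOM decoration and then loads the matching SVGs):

```javascript
// Replace :iconname: notation with an icon span (illustrative sketch).
function decorateIconNotation(text) {
  return text.replace(/:([a-z0-9-]+):/g, '<span class="icon icon-$1"></span>');
}
```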
If there is any JavaScript that is required to load the SVGs it can be adapted as needed.\nThis code is included in Boilerplate, there is no need to copy it.\n\nSee SVG loading Code\n\nPrevious\n\nSections\n\nUp Next\n\nHero","lastModified":"1772011244","labs":""},{"path":"/developer/block-collection/links","title":"Links","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Hyperlinks are essential to connect websites and your content. To create a link, just use the insert link option in word or google doc.","content":"style\ncontent\n\nLinks\nNotes:\n\nHyperlinks are essential to connect websites and your content. To create a link, just use the insert link option in word or google doc.\n\nLinks can be added across all the default content and the different formatting options.\n\nIn Word and Google Docs only absolute links are accepted, which is usually easier to copy paste from your browser. Links are automatically converted to be relative to your site, while external links are kept absolute.\n\nLinks are often used beyond text links and reference for example embedded media or referenced fragments that are inlined in the page.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nAs links are considered Default Content they are styled in project or block CSS code. 
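The automatic conversion of absolute links to site-relative links mentioned in the notes happens during publishing, but the rule can be sketched as follows (illustrative; www.example.com stands in for your site's host):

```javascript
// Links pointing at the site's own host become relative paths;
// external links stay absolute (illustrative sketch of the pipeline rule).
function relativize(href, siteHost) {
  const url = new URL(href);
  return url.host === siteHost
    ? url.pathname + url.search + url.hash
    : href;
}
```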
There is usually no JavaScript code used.\nThere is no link-related styling code in the boilerplate project.\n\nSpecial Mention: Microsoft Word Online does not allow links on images, so a workaround would be to let authors put a link directly below the image and then wrap it on the client side, e.g.\n\n/**\n * Wraps images followed by links within a matching <a> tag.\n * @param {Element} container The container element\n */\nfunction wrapImgsInLinks(container) {\n  const pictures = container.querySelectorAll('picture');\n  pictures.forEach((pic) => {\n    const link = pic.nextElementSibling;\n    if (link && link.tagName === 'A' && link.href) {\n      link.innerHTML = pic.outerHTML;\n      pic.replaceWith(link);\n    }\n  });\n}\n\n\nSpecial Mention: It is recommended to handle certain links that need to be opened in a new window based on whether they are external links or PDFs (for example) on the client side, e.g.\n\n/**\n * Handles external links and PDFs to be opened in a new tab/window\n * @param {Element} main The main element\n */\nexport function decorateExternalLinks(main) {\n  main.querySelectorAll('a').forEach((a) => {\n    const href = a.getAttribute('href');\n    if (href) {\n      const extension = href.split('.').pop().trim();\n      if (!href.startsWith('/')\n        && !href.startsWith('#')) {\n        if (!href.includes('xyz.com') || (extension === 'pdf')) {\n          a.setAttribute('target', '_blank');\n        }\n      }\n    }\n  });\n}\n\n\n\nPrevious\n\nLists\n\nUp Next\n\nButtons","lastModified":"1725864574","labs":""},{"path":"/developer/block-collection/lists","title":"Lists","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Lists serve many purposes in the Web in general, some uses are obvious lists inside default content, while others are used in navigations or other ...","content":"style\ncontent\n\nLists\nNotes:\n\nLists serve many purposes in the Web in general, some uses are obvious lists inside
default content, while others are used in navigations or other hierarchical constructs.\n\nExtraction of nested numbered lists and bullet lists is supported. Lists are converted to the <ol> and <ul> HTML tags respectively.\n\nComplex list items seem to be hard to manage in word processing without accidentally being broken up so it is generally recommended to keep lists relatively simple when it comes to the complexity of the items in the list.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nAs lists are considered Default Content they are styled in project or block CSS code. There is usually no JavaScript code used.\nThis code is included in Boilerplate, there is no need to copy it.\n\nView Code\n\nPrevious\n\nImages\n\nUp Next\n\nLinks","lastModified":"1772012312","labs":""},{"path":"/developer/block-collection/metadata","title":"Metadata Block","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"The Metadata table is handled by the pipeline service to add <meta> tags in the <head> of the HTML markup delivered from the service. It ...","content":"style\ncontent\n\nMetadata Block\nNotes:\n\nThe Metadata table is handled by the pipeline service to add <meta> tags in the <head> of the HTML markup delivered from the service. It does not appear verbatim in the HTML markup. There should only be one Metadata table per page and while its placement doesn’t matter, by convention it is placed at the bottom of the document.\n\nThe metadata table is essentially following an intuitive name/value pair structure where the name is in the first column of the table and the value is in the second column.\n\nThere are a few special properties that behave according to the HTML specification and popular additional metadata schemas like og: and twitter:. The well known metadata properties include title, description, and image. 
See special metadata properties for the full list.\n\nThere are also special semantics for theme and template which get added as classes to the <body> element by the boilerplate code and are often used for styling and autoblocking.\n\nBeyond that, a project can add an arbitrary number of name value pairs that get added as <meta> tags to the markup, and can be used with project specific semantics.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nThe metadata table is processed as part of the HTML rendering service. There is no project code related to the processing.\n\nPrevious\n\nFooter\n\nUp Next\n\nSection Metadata","lastModified":"1772012364","labs":""},{"path":"/developer/block-collection/section-metadata","title":"Section Metadata","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"The Section Metadata table is handled by boilerplate code internally to add data-*s attributes to the containing section.","content":"style\ncontent\n\nSection Metadata\nNotes:\n\nThe Section Metadata table is handled by boilerplate code internally to add data-*s attributes to the containing section.\n\nSection Metadata table follows an intuitive name/value pair structure where the name is in the first column of the table and the value is in the second column.\n\nThe Style property is translated into a class while any other name will be transformed into a data-* attribute.\n\nAs Section Metadata generally adds complexity for authors, it is recommended to avoid it, until it is really necessary.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nThe section metadata table is processed as part of the boilerplate code. 
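The name/value mapping described above can be sketched as follows. This is modeled on boilerplate behavior but is illustrative only; the real code applies the result to the section's DOM element and its details may differ:

```javascript
// Turn Section Metadata rows ([name, value] pairs) into section attributes:
// the Style row becomes one or more classes, any other row a data-* attribute.
function sectionAttributes(rows) {
  const classes = [];
  const data = {};
  rows.forEach(([name, value]) => {
    const key = name.trim().toLowerCase().replace(/[^0-9a-z]+/g, '-');
    if (key === 'style') {
      value.split(',').forEach((v) => {
        classes.push(v.trim().toLowerCase().replace(/[^0-9a-z]+/g, '-'));
      });
    } else {
      data[`data-${key}`] = value;
    }
  });
  return { classes, data };
}
```

For example, a row "Style | highlight, dark" yields the classes highlight and dark, while "Background | blue" yields data-background="blue".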
There is no project code related to the processing.\n\nPrevious\n\nMetadata\n\nUp Next\n\nBlock Collection","lastModified":"1772012703","labs":""},{"path":"/developer/block-collection/sections","title":"Sections","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Sections are the top level grouping mechanism in documents, think of them as containers for a set of default content and blocks. Learn more about ...","content":"style\ncontent\n\nSections\nNotes:\n\nSections are the top level grouping mechanism in documents, think of them as containers for a set of default content and blocks. Learn more about the Document Structure here.\n\nSections are separated by “Horizontal Rules” or --- to group certain elements of a page together. There may be both semantic and design reasons to group content together, a simple case could be that a section of a page has a different background color.\n\nTechnically a section just introduces a <div> wrapper in the markup delivered around all the blocks and default content contained in the section.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nIn most cases generic sections don’t have much styling code beyond project specific box layout (eg. margins, padding, max-width) and are sometimes augmented with section metadata to control styling (often background colors or images).\n\nSee Section Styling Code\n\nPrevious\n\nCode\n\nUp Next\n\nIcons","lastModified":"1772012751","labs":""},{"path":"/developer/block-collection/text","title":"Text","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"A Text Paragraph (or Copy) is the most common element on websites. AEM understands and translates a number of semantic formatting like bold, italic, underline, ...","content":"style\ncontent\n\nText\nNotes:\n\nA Text Paragraph (or Copy) is the most common element on websites. 
AEM understands and translates a number of semantic formatting options like bold, italic, underline, strike-through as well as subscript and superscript, which are translated to their respective semantic HTML tags of <strong>, <em>, <u>, <s>, <sub> and <sup>.\n\nCSS styling may take hints from these formatting options and use them to express visually very different styling. Both a paragraph and line-feed are supported.\n\nThe first portion of the first text paragraph serves as the default description for a page if nothing else is specified in metadata.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nThe content structure is based on the simple Normal Text paragraphs in word or google doc.\n\nSee Content in Document\n\nCode:\n\nAs text is Default Content it is styled in project or block CSS code. There is usually no JavaScript code used.\nThis code is included in boilerplate, there is no need to copy it.\n\nhttps://github.com/adobe/helix-project-boilerplate/blob/27e8571592220da8ded7c8a7e5064d982f7cfe76/styles/styles.css#L45-L51\n\nPrevious\n\nHeadings\n\nUp Next\n\nImages","lastModified":"1772013202","labs":""},{"path":"/developer/example-form/thank-you","title":"Thank you for your submission.","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"We will not contact you of course, as this is just a demo form.","content":"Thank you for your submission.\n\nWe will not contact you of course, as this is just a demo form.","lastModified":"1725864574","labs":""},{"path":"/developer/favicon","title":"Favicon","image":"/developer/media_16f2a10bfad070c49499f17bcb04684fd1bf91c1e.jpg?width=1200&format=pjpg&optimize=medium","description":"Adding a favicon to your site gives it a professional look in your visitor’s browsers:","content":"style\ncontent\n\nFavicon\n\nAdding a favicon to your site gives it a professional look in your visitor’s browsers:\n\nAdding a favicon\n\nThe easiest way is to add a file called favicon.ico to the root folder
in your code repository. We recommend using the .ico format for best support across all major browsers. That’s all – your site now has a favicon!\n\nGone repoless?\n\nIf you are reusing the same code repository for multiple sites (see Repoless) and you need different favicons for some or all of them, you can add the favicon.ico file to the root folder of each content source instead, and preview and publish it using the Sidekick or Admin API.\n\nPrevious\n\nFastly Setup\n\nUp Next\n\nRedirects","lastModified":"1760082379","labs":""},{"path":"/developer/forms","title":"Forms","image":"/developer/media_11ab97601e91b12d6da7e6f92c0236759f761aad5.jpg?width=1200&format=pjpg&optimize=medium","description":"Edge Delivery Services for AEM Forms allows you to update, publish, and launch new forms rapidly. These forms are easy to author and develop. You ...","content":"style\ncontent\n\nForms\n\nEdge Delivery Services for AEM Forms allows you to update, publish, and launch new forms rapidly. These forms are easy to author and develop. You can\n\nCreate forms with tools of your choice: You can use Document-based Authoring (Microsoft SharePoint or Google Drive with Microsoft Excel or Google Sheets) or WYSIWYG Authoring (Universal Editor) to create forms.\nCreate forms optimized for speed, performance, and higher conversions: Deliver forms experiences that load and render quickly and continuously monitor your forms performance through Operational Telemetry.\nUse developer friendly toolset: Edge Delivery Services for AEM Forms uses plain HTML, modern CSS, and vanilla JavaScript to create exceptional forms experiences. A developer with basic web development skills can customize and easily build form components and forms experiences. 
There is no need to wait for a pipeline to run; just check your code into GitHub and your changes are live.\nUse Forms Submission Service: The Forms Submission Service lets you save form submissions directly to spreadsheets like OneDrive, SharePoint, or Google Sheets, even if the spreadsheet isn’t managed by Edge Delivery Services.\n\nNote: To use Edge Delivery Services for AEM Forms, a valid AEM Forms license is required. Refer to the Product Description for licensing details.\n\nKey Features of Document-based Authoring and WYSIWYG Authoring\n\nThe choice between Document-based Authoring and WYSIWYG Authoring depends on your needs: Use Document-based Authoring for simple, spreadsheet-like forms with basic fields and quick data connectivity, while WYSIWYG Authoring is ideal for complex forms requiring multiple panels, business logic, data integration, and AEM Workflows.\n\nFeatures\nDocument-based Authoring\nWYSIWYG Authoring\nAccessible components for a user-friendly experience.\nY\nY\nStandardized HTML structure for consistent rendering\nY\nY\nRules and validations to ensure data accuracy\nY\nY\nFile attachment options for collecting additional information\nY\nY\nAbility to create custom form components for specific needs\nY\nY\nSubmit form data directly to Microsoft Excel or Google Sheets or email addresses\nY\nY\nMonitor your forms performance through Operational Telemetry\nY\nY\nAdvanced rules editor for creating complex logic\nY\nClient-side extensibility for custom functionalities.\nY\nWYSIWYG editing experience for easy form creation and visualization.\nY\nDocument of record functionality to create tamper-proof archives of submitted data\nY\nGoogle reCAPTCHA integration for spam protection.\nY\nIntegration with Adobe Workfront Fusion to trigger Adobe Workfront Fusion scenarios upon form submission.\nY\nIntegration with various data sources for pre-populating forms and submitting data\nY\nForm Data Model (FDM) for defining data structure and 
interactions with various data sources.\nY\nCustom Submit Action for handling form submissions\nY\nSubmit to Microsoft SharePoint\nY\nSubmit to Microsoft OneDrive\nY\nSubmit to Azure Blob Storage\nY\nSubmit to REST endpoint\nY\nInvoke an AEM Workflow\nY\nInvoke a Power Automate flow\nY\nSubmit to Marketo Engage\nY\nSubmit to Adobe Experience Platform (AEP)\nY\nSubmit to Spreadsheet\nY\nSubmit using Form Data Model (FDM)\nY\nConnect to Salesforce application\nY\nConnect to Microsoft Dynamics OData\nY\n\nIn essence, WYSIWYG Authoring builds upon the foundation of Document-based Authoring, providing a more advanced toolkit for creating and managing complex forms.\n\nGet Started\n\nEdge Delivery Services for AEM Forms provides the Adaptive Forms Block to allow you to create and render forms. You can style various components of the form as per your requirements.\n\nCreate or configure your AEM Edge Delivery Services GitHub project\nIf you have an existing AEM Edge Delivery Services GitHub project, you can integrate the Adaptive Forms Block into your current project to get started on form creation.\nIf you don’t have an existing AEM Edge Delivery Services GitHub project, create a new AEM project pre-configured with the Adaptive Forms Block.\nCreate a form and add it to your AEM Edge Delivery Services page\n\nWYSIWYG authoring:\n\nCreate a form using Universal Editor\nAdd Dynamic Behavior to Forms\nConfigure and Customize Form Submit Actions\nPublish and Deploy Forms\nStyling and Theming Guide\nProtect Your Forms from Spam: Adding reCAPTCHA Security\nBuild Custom Form Components\n\nDocument-based authoring:\n\nCreate a form using Google Sheets or Microsoft Excel\nSet up your Google Sheets or Microsoft Excel files to start accepting data\nPublish your form and start collecting data\nCustomize the look of your forms\n\nFor more details about the Adaptive Forms Block, check out the AEM Forms Edge Delivery Services documentation.\n\nPrevious\n\nCustom Headers\n\nUp 
Next\n\nIndexing","lastModified":"1765382030","labs":""},{"path":"/developer/indexing","title":"Indexing","image":"/developer/media_154896ddb0d10ee236adc3592217d30238ede804c.jpeg?width=1200&format=pjpg&optimize=medium","description":"Adobe Experience Manager offers a way to keep an index of all the published pages in a particular section of your website. This is commonly ...","content":"style\ncontent\n\nIndexing\n\nAdobe Experience Manager offers a way to keep an index of all the published pages in a particular section of your website. This is commonly used to build lists and feeds, and to enable search and filtering use cases for your pages or content fragments.\n\nAEM keeps this index in a spreadsheet and offers access to it using JSON. Please see the document Spreadsheets and JSON for more information.\n\nWe will introduce the concept of creating a query index by previewing an Excel workbook or Google spreadsheet first. Note that if you already have a custom query definition in a file called helix-query.yaml in your GitHub repository, it is no longer possible to create indexes that way. 
Every new index will have to be manually added to that helix-query.yaml.\n\nSetting Up an Initial Query Index\n\nIn this section we’ll create a query index in the root folder that will index all documents in your backend.\n\nAfter setting up your fstab.yaml with a mountpoint that points into your SharePoint site or Google Drive, go to the root folder.\nDepending on your backend, create either a workbook named query-index.xlsx for SharePoint or a spreadsheet named query-index for Google Drive.\nIn that spreadsheet or workbook, create a sheet named raw_index.\nSetting Up Properties to be Added to the Index\nIn your query-index document, add a header line, and in the first column add path as the header name.\nIn the following columns of the header line, add all other properties you need extracted from the rendered HTML page.\n\nIn the following example in Google Drive, the extracted fields are title, image, description, and lastModified.\n\n\nPages are indexed when they are published. To remove pages from the index, they have to be unpublished.\n\nFor simple scenarios without a custom index definition, pages that have the robots metadata property set to noindex will automatically be omitted from indexing by AEM. (There are a few special scenarios here; for more details see the section Special Scenarios for Robots.)\n\n\nThe following table summarizes the properties that are available and from where in the HTML page they’re extracted.\n\nName\tDescription\nauthor\tReturns the content of the meta tag named author in the head element.\ntitle\tReturns the content of the og:title meta property in the head element.\ndate\tReturns the content of the meta tag named publication-date in the head element.\nimage\tReturns the content of the og:image meta property in the head element.\ncategory\tReturns the content of the meta tag named category in the head element. 
\ntags\tReturns the content of the meta tag named article:tag in the head element, as an array. See the document Spreadsheets and JSON for more information on array-handling.\ndescription\tReturns the content of the meta tag named description in the head element.\nrobots\tReturns the content of the meta tag named robots in the head element.\nlastModified\tReturns the value of the Last-Modified response header for the document.\n\nFor every other header added, the indexer will try to find a meta tag with a corresponding name.\n\nActivate Your Index\n\nTo activate your index, preview the spreadsheet using the Sidekick. This will create an index configuration.\n\nChecking Your Index\n\nThe Admin Service has an API endpoint where you can check the index representation of your page. Given your GitHub owner, repository, and branch, and a resource path to a page, its endpoint is:\n\nhttps://admin.hlx.page/index/<owner>/<repo>/<branch>/<path>\n\nYou should get a JSON response where the data node contains the index representation of the page.\n\nDebugging Your Index Configuration\n\nThe AEM CLI has a feature where it will print the index record whenever you change your query configuration, which assists in finding the correct CSS selectors:\n\n$ aem up --print-index\n\nPlease see the AEM CLI GitHub documentation for more information and watch this video to learn more about this feature.\n\nSetting Up More Index Configurations\n\nYou can define your own custom index configurations by creating your own helix-query.yaml. This allows you to have more than one index configuration in the same helix-query.yaml, where parts of your site are indexed into different Excel workbooks or Google spreadsheets. See the document Indexing reference for more information.\n\nSpecial Scenarios for Robots\n\nThere are a few nuances on how pages get indexed by AEM in conjunction with the indexing setup for your site. 
Let’s look at them:\n\nIn the following two situations, setting robots to noindex on the page metadata would not prevent it from being indexed by AEM:\n\nYou have added a robots column in query-index.xlsx\nYou have a helix-query.yaml in your GitHub repository, i.e. you have defined a custom index definition.\nRecommendations\nIf you do not have a custom index definition, it is recommended not to add a robots column to your index sheet unless you have a requirement for doing so.\nAdding a robots column to your index sheet would cause a page to be indexed by AEM even though it may have robots metadata set to noindex.\nIf you do have a custom index definition, pages would get indexed by AEM irrespective of setting robots to noindex on the page metadata. If you want to prevent this from happening, you can use spreadsheet filters to omit pages from the index that have robots metadata set to noindex. For more details, see the section titled \"Enforcing noindex configuration with custom index definitions\" below.\nEnforcing “noindex” configuration with custom index definitions\n\nIf you have defined your own custom index definitions in helix-query.yaml, setting the robots property to noindex is not effective in preventing the pages from getting indexed. To enforce the noindex configuration in such situations, do the following:\n\nCreate a sheet named “helix-default” in your query-index.xlsx. After this, your query-index.xlsx spreadsheet should have two sheets: “raw_index” and “helix-default”. The “raw_index” sheet holds all the raw indexed data.\nModify your custom helix-query.yaml (it must be in your project’s GitHub repository) and add the robots property so that it gets indexed.\nNow set up your “helix-default” sheet in the query-index.xlsx spreadsheet to be filled automatically using an Excel formula which ensures that all the rows in raw_index which have the robots property set to noindex do not get copied over to the helix-default sheet. 
This can be done by using an Excel formula like this: =FILTER(Table1,NOT(Table1[robots]=\"noindex\"))\nNow your helix-default sheet has only the rows from raw_index that do not have the robots property set to noindex.\nEnsure that you publish the pages that you want to get indexed.\nNow if you fetch the index as usual, like https://<branch>--<repo>--<org>.hlx.page/query-index.json, you’d only get data from the helix-default sheet, i.e. entries that are not explicitly prevented from getting indexed through the robots property set to noindex.\n\nPrevious\n\nForms\n\nUp Next\n\nKeeping it 100","lastModified":"1725864574","labs":""},{"path":"/developer/keeping-it-100","title":"Web Performance, Keeping your Lighthouse Score 100.","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"The quality of the experience of websites is crucial to achieving the business goals of your website and the satisfaction of your visitors.","content":"style\ncontent\n\nWeb Performance, Keeping your Lighthouse Score 100.\n\nThe quality of the experience of websites is crucial to achieving the business goals of your website and the satisfaction of your visitors.\n\nAdobe Experience Manager (AEM) is optimized to deliver excellent experiences and optimal web performance. With the Real Use Monitoring (RUM) operations data collection, information is continuously collected from field use, offering a way to iterate on real-use performance measurements without having to wait for the CrUX data to show the effects of code and deployment changes. It is common for field data collected in RUM to deviate from the lab results, as the network, geo-location, and processing power of real devices are much more diverse than the simulated conditions in a lab.\n\nThe Google PageSpeed Insights Service is proven to be a great lab measurement tool. It can be used to avoid the slow deterioration of the performance and experience score of your website.\n\nServer-side vs. 
client-side rendering\n\nAll canonical content on a page is rendered into markup on the server. Decorations of the content via CSS and DOM only affect display and refined, specialized semantics for accessibility and beyond.\nClient-side rendering (fetching JSON and rendering significant portions of the page on the client) is only used in situations where there is no canonical content of the page (e.g. blocks that list other pages, applications, etc.)\n\nRedundant content that is semantically not a part of the canonical content of the page is not included in the markup for performance reasons (it slows down LCP and introduces unnecessary blocking time, measured via TBT and affecting INP by proxy). This includes headers and footers, and fragments that are used redundantly on a large number of pages.\n\nCore Web Vitals (CWV) and Lighthouse via PageSpeed Insights.\n\nThe performance of a website impacts its rankings in search results, and actual end-user performance is reflected by the Core Web Vitals (CWV) in the CrUX report. CWV is the ultimate arbiter of what good vs. bad web performance (and technical user experience) is for the visitors to your website.\n\nCWV metrics collected in the real world (field data) are as much a function of the code as of the network setup and the devices your visitors use. With PageSpeed Insights, Google offers an isolated service that runs Google's Lighthouse tests in a standardized configuration selected by Google based on the global distribution of mobile and desktop devices.\n\nLighthouse (LH) scores via PageSpeed Insights provide a valuable and reliable proxy in a lab environment that allows you to make relative assertions about changes in your code. An improved Lighthouse score on LCP and CLS will yield an improved CWV score. 
Conversely, a worse or unchanged LH score will very likely have the corresponding effect on CWV.\n\n\nTo avoid using a project-specific configuration for Lighthouse testing, we use the continuously-updated configurations seen as part of the mobile and desktop strategies referenced in the latest versions of the Google PageSpeed Insights API.\n\nWhile there may be additional insight that some developers feel they can collect from other ways of measuring Lighthouse scores, to be able to have a meaningful and comparable performance conversation across projects, there needs to be a way to measure performance universally. The default PageSpeed Insights Service is the most authoritative, most widely accepted lab test when it comes to measuring your performance.\n\nHowever, it is important to remember that the recommendations you get from PageSpeed Insights do not necessarily lead to better results, especially the closer you get to a Lighthouse score of 100.\n\nCore Web Vitals (CWV) data collected by the built-in Operational Telemetry plays an important role in validating results quickly in the field. For minor changes, however, the variance of the results and the lack of sufficient data points (traffic) over a short period of time make it impractical to get statistically relevant results in most cases.\n\nGetting Started with Web Performance\n\nWhen you start your project with the Boilerplate as in the Developer Tutorial, you will get a very stable Lighthouse score on PageSpeed Insights of 100 for both mobile and desktop. 
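To spot-check a page yourself, the public PageSpeed Insights API can be queried directly (e.g. https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=<page>&strategy=mobile) and the performance score read from the JSON response. A minimal sketch, assuming the v5 response shape (lighthouseResult.categories.performance.score); verify the field names against Google's current API documentation before relying on them:

```javascript
// Extract the 0–100 performance score from a PageSpeed Insights v5 response.
// The field names below follow the v5 API and should be treated as an
// assumption to verify against the current documentation.
function performanceScore(psiResponse) {
  const score = psiResponse?.lighthouseResult?.categories?.performance?.score;
  // The API reports the score as a 0–1 fraction; null means "not available".
  return score == null ? null : Math.round(score * 100);
}
```

This keeps parsing separate from fetching, so the same helper works whether the response comes from fetch(), curl output, or a cached file.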
On every component of the Lighthouse score there is some buffer for the project code to consume and still be within the boundaries of a perfect 100 Lighthouse score.\n\nTesting Your Pull Requests\n\nIt turns out that it is hard to improve your Lighthouse score once it is low, but it is not hard to keep it at 100 if you continuously test.\n\nWhen you open a pull request (PR) on a project, the PageSpeed Insights Service is run against the test URLs in the description of your project. The AEM GitHub bot will automatically fail your PR if the score is below 100, with a little bit of buffer to account for some volatility of the results.\n\nThe results are for the mobile Lighthouse score, as it tends to be harder to achieve than the desktop score.\n\nThree-Phase Loading (E-L-D)\n\nDissecting the payload that's on a web page into three phases makes it relatively straightforward to achieve a clean Lighthouse score and therefore set a baseline for a great customer experience.\n\nThe three-phase loading approach divides the payload and execution of the page into three phases:\n\nPhase E (Eager): This contains everything that's needed to get to the largest contentful paint (LCP).\nPhase L (Lazy): This contains everything that is controlled by the project and largely served from the same origin.\nPhase D (Delayed): This contains everything else, such as third-party tags or assets that are not material to the experience.\nPhase E: Eager\n\nBefore anything happens, it is important to note that the body must be hidden (with display:none) to make sure no images start downloading and to avoid initial CLS.\n\nIn the eager phase, the first action is to “decorate” the DOM: the loading sequence makes a few adjustments, mainly adding CSS classes to icons, buttons, blocks, and sections, and creating the auto-blocks. 
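The class decoration just described follows a simple naming convention. As a toy sketch of the kind of classes a block and its wrappers end up with (the scheme mirrors the boilerplate's conventions, e.g. a "cards" block getting cards-wrapper and cards-container elements, but this is an illustration, not the boilerplate's actual code):

```javascript
// Illustrative only: derive the CSS classes the decoration step would attach
// for a given block name. The real loading sequence works on DOM nodes; this
// pure function just shows the naming scheme.
function blockClasses(name) {
  return {
    block: [name, 'block'],          // the block element itself
    wrapper: [`${name}-wrapper`],    // the wrapping <div> added around it
    container: [`${name}-container`] // the class added to the section
  };
}
```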
See the Markup, Sections, Blocks, and Auto Blocking page for more details on the resulting markup.\n\nThe body can then already be displayed, considering that sections are not loaded yet and remain hidden.\n\nThen, the full first section is loaded, with priority given to the first image of this section, the “LCP candidate”. In theory, the fewer blocks the first section has, the faster the LCP can be loaded.\n\nOnce the LCP candidate and all blocks of the section are loaded, the first section can be displayed and the fonts can start loading asynchronously.\n\nThis ends the eager phase.\n\nLCP\n\nIn general, the LCP element is the “hero” image displayed at the top of a page. It is crucial to make sure this image is loaded and displayed as soon as possible in the loading sequence (see the Eager phase).\n\nEverything that is needed for the true LCP to be displayed must be loaded. In a project, this usually consists of the markup, the CSS styles, and JavaScript files.\n\nIn many cases the LCP element is contained in a block, where the block .js and .css also have to be loaded.\n\nIt is a good rule of thumb to keep the aggregate payload before the LCP is displayed below 100kb, which usually results in an LCP event quicker than 1560ms (LCP scoring at 100 in PSI). Especially on mobile the network tends to be bandwidth constrained, so changing the loading sequence before LCP has minimal to no impact.\n\nLoading from or connecting to a second origin before the LCP has occurred is strongly discouraged, as establishing a second connection (TLS, DNS, etc.) adds a significant delay to the LCP.\n\nThere are situations where the actual LCP element is not included in the markup that is transmitted to the client. 
This happens when there is an indirection or lookup (for example a service that’s called, a fragment that’s loaded, or a lookup that needs to happen in a .json file) for the LCP element.\nIn those situations, it is important that the page loading delays guessing the LCP candidate (currently the first image on the page) until the first block has made the necessary changes to the DOM.\n\nThere are other situations where the content contains two hero images: one for desktop, one for mobile. As above, it is important to make sure that the correct image is considered the LCP candidate; the “hero” block might need to be adjusted to remove the unnecessary image from the DOM (remove the desktop image on mobile devices or vice versa) so as not to load a bandwidth-consuming image or, even worse, have the unnecessary image loaded first as the LCP candidate.\n\nFinally, the LCP element can be something other than an image, such as a video or a long piece of text. For all those cases, a deep understanding of the loading sequence and of how the LCP candidate is computed is necessary to make the correct optimizations.\n\nPhase L: Lazy\n\nIn the lazy phase, the portion of the payload is loaded that doesn't affect total blocking time (TBT) and ultimately first input delay (FID).\n\nThis includes things like loading the next sections and their blocks (JavaScript and CSS), loading all the remaining images according to their loading=\"lazy\" attribute, and loading other JavaScript libraries that are not blocking. 
The lazy phase is generally everything that happens in the various blocks you are going to create to cover the project needs.\n\nIn this phase it is still advisable that the bulk of the payload comes from the same origin and is controlled by the first party, so that changes can be made if needed to avoid a negative impact on TBT, TTI, and FID.\n\nPhase D: Delayed\n\nIn the delayed phase, the parts of the payload are loaded that don't have an immediate impact on the experience and/or are not controlled by the project and come from third parties. Think of marketing tooling, consent management, extended analytics, chat/interaction modules, etc., which are often deployed through tag management solutions.\n\nIt is important to understand that for the impact on the overall customer experience to be minimized, the start of this phase needs to be significantly delayed. The delayed phase should start at least three seconds after the LCP event to leave enough time for the rest of the experience to get settled.\n\nThe delayed phase is usually handled in delayed.js, which serves as an initial catch-all for scripts that cause TBT. Ideally, the TBT problems are removed from the scripts in question, either by loading them outside of the main thread (in a web worker) or by removing the actual blocking time from the code. Once the problems are fixed, those libraries can easily be added to the lazy phase and be loaded earlier.\n\nIdeally, there is no blocking time in your scripts, which is sometimes hard to achieve, as commonly used technology like tag managers or build tooling creates large JavaScript files that block while the browser is parsing them. 
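A delayed-phase loader along those lines might look like the following minimal sketch. The document is passed in as a parameter purely so the helper can be exercised outside a browser, and the tag URL is a placeholder, not a real endpoint:

```javascript
// Minimal sketch of a delayed-phase script loader, the kind of code that
// would live in delayed.js and run well after the LCP event. Illustrative,
// not the boilerplate's actual implementation.
function loadThirdPartyScript(doc, src) {
  const script = doc.createElement('script');
  script.src = src;
  script.async = true; // never block parsing on a third-party tag
  doc.head.appendChild(script);
  return script;
}

// In a browser: loadThirdPartyScript(document, 'https://example.com/tag.js');
```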
From a performance perspective, it is advisable to remove those techniques, make sure your individual scripts are not blocking, and load them individually as separate, smaller files.\n\nHeader and Footer\n\nThe header and specifically the footer of the page are not in the critical path to the LCP, which is why they are loaded asynchronously in their respective blocks. Generally, resources that do not share the same life cycle (meaning that they are updated with authoring changes at different times) should be kept in separate documents to make the caching chain between the origins and the browser simpler and more effective. Keeping those resources separate increases cache hit ratios and reduces cache invalidation and cache management complexity.\n\nFonts\n\nSince web fonts are often a strain on bandwidth and loaded from a different origin via a font service like https://fonts.adobe.com or https://fonts.google.com, it is largely impossible to load fonts before the LCP, which is why they are loaded right after.\n\nBy default, the AEM Boilerplate implements the font fallback technique to avoid CLS when the font is loaded. It would be counterproductive to preload the fonts (via early hints, h2-push, or markup), as doing so would largely impact performance.\n\nBonus: Speed is Green\n\nBuilding websites that are fast, small, and quick to render is not just a good idea to deliver exceptional experiences that convert better; it is also a good way to reduce carbon emissions.\n\nCommon Sources of Performance Issues\n\nOver time, we have gathered a collection of anti-patterns that negatively impact performance and need to be avoided to be compliant with the best practices in this document.\n\nEarly hints / h2-push / pre-connect are part of the network budget\n\nInstinctively, it would make sense to tell the browser to download as much as possible, as early as possible, even before the markup processing starts. 
But remember, the ultimate goal is to have a stable page to show to the visitor as quickly as possible. LCP timing is a good proxy for that.\n\nAs a rule of thumb, to get an LCP score of 100 on mobile with PageSpeed Insights, the network constraints are set up in a way that there can only be a single host with a network payload not exceeding 100kb, as the setup is largely bandwidth constrained. Early hints, h2-push, and pre-connect consume that bandwidth by downloading resources that are not required for the LCP; they therefore negatively impact performance and have to be removed.\n\nRedirects for path resolution\n\nIf a visitor requests www.domain.com and gets redirected to www.domain.com/en and then to www.domain.com/en/home, they get a penalty for each redirect, i.e. performance is negatively impacted. This is mostly visible in Core Web Vitals measured via RUM or CrUX, as lab results in PageSpeed Insights by default exclude redirect overhead from the lab test.\n\nCDN client scripts injection\n\nOur markup, as well as our .aem.page and .aem.live origins, is optimized for performance, and we are extremely careful with every part of the payload as well as the loading sequence for resources.\n\nSome CDN vendors and configurations inject scripts that consume bandwidth and create blocking time before the LCP, with a negative impact on performance. Those scripts should be disabled, or loaded appropriately in the loading sequence after the LCP.\n\nA comparison of the PageSpeed Insights report for a .aem.live origin with the corresponding site that's fronted by a customer's CDN (e.g. the production site) will show the negative impact produced by a CDN outside of AEM's control.\n\nCDN TTFB and Protocol Implementation Impact\n\nDepending on the CDN vendor, there are differences in protocol implementations and performance characteristics for the pure delivery of the HTTP payload. 
Additional tooling like a WAF and other network infrastructure upstream of AEM may also negatively impact performance.\n\nPrevious\n\nIndexing\n\nUp Next\n\nMarkup - Sections","lastModified":"1762453422","labs":""},{"path":"/developer/markup-reference","title":"HTML Markup reference","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"While for most development tasks the DOM is the relevant interface for a developer, there are certain situations (e.g. Auto Blocking) where a developer interacts ...","content":"style\ncontent\n\nHTML Markup reference\n\nWhile for most development tasks the DOM is the relevant interface for a developer, there are certain situations (e.g. Auto Blocking) where a developer interacts with the raw HTML markup that is rendered by the Franklin pipeline on the server.\n\nGeneral HTML Markup Document structure\n<!DOCTYPE html>\n<html>\n  <head>\n    <title>...</title>\n    {metadata}\n    {head.html}\n  </head>\n  <body>\n    <header></header>\n    <main>\n\t{main-content}\n    </main>\n    <footer></footer>\n  </body>\n</html>\n\nMetadata\n\nThe metadata portion of the document consists of a list of <meta> tags; some of them correspond to well-known HTML metadata, while others are metadata without defined semantics. 
See more in the metadata block specification.\n\nIn the simplest case, there is, e.g., a\n\n<meta name=\"template\" content=\"docs\">\n\nwhich maps a metadata property named template to a value of docs.\n\nhead.html\n\nThe head.html portion of the document contains a verbatim copy of what’s found in the head.html of the corresponding GitHub branch.\n\nMain Content\n\nAll the content that maps to the document semantics of an individual page or fragment is found in the main-content section of the markup. It consists of default content, sections, and blocks.\n\nIt is important to note that both default content and any cell of a block can contain the following HTML tags and attributes.\n\n<a>\n<br>\n<code>\n<del>\n<em>\n<h1> to <h6>\n<img>\n<li>\n<ol>\n<p>\n<picture>\n<pre>\n<source>\n<span>\n<strong>\n<sub>\n<sup>\n<table>\n<tbody>\n<td>\n<th>\n<thead>\n<tr>\n<u>\n<ul>\n\n\nWhile most of those are self-explanatory and align very much with their respective semantic definitions, there are a couple that are worth calling out.\n\n<h1> to <h6>\n\nHeadings 1 through 6 have an id= attribute containing a sanitized version of the .innerText to allow for direct addressing of a heading in a URL fragment.\n\n<picture>, <source> and <img>\n\nImages are rendered as a <picture>, <source>, and <img> combination, with a mobile and a desktop breakpoint, as well as a fallback for browsers that don’t support WebP.\n\nThe <img> element features height= and width= attributes containing the intrinsic dimensions of the image, as well as an alt= attribute with the alt text as provided.\n\n<span>\n\nThe use of <span> is limited to icons, and it indicates the icon as part of the class=\"icon icon-<iconname>\" attribute.\n\n<a>\n\nThere is special handling for links to AEM preview and live domains (hlx.page, hlx.live, aem.page, aem.live). Any link to one of these domains is rendered as a relative link. 
This allows authors to link to any of these domains from content pages and have the links still work across preview, live, and production domains.","lastModified":"1725864574","labs":""},{"path":"/developer/markup-sections-blocks","title":"Markup, Sections, Blocks, and Auto Blocking","image":"/developer/media_1a70c29a5d772d0dc5f0cd8d513af41df5bb8177d.jpeg?width=1200&format=pjpg&optimize=medium","description":"To design websites and create functionality, developers use the markup and DOM that is rendered dynamically from the content. The markup and DOM are constructed ...","content":"style\ncontent\n\nMarkup, Sections, Blocks, and Auto Blocking\n\nTo design websites and create functionality, developers use the markup and DOM that is rendered dynamically from the content. The markup and DOM are constructed in a way that allows flexible manipulation and styling. At the same time it provides out-of-the-box functionality so the developer does not have to worry about some of the aspects of modern websites.\n\nStructure of a Document\n\nThe single most important aspect when structuring a document is to make it simple and intuitive for the authors who will contribute the content.\n\nThis means that it is strongly recommended to involve authors very early in the process. In many cases it is good practice to just let authors put the content that needs to go onto a page into a Google Doc or Word document without any notion of blocks and sections, and then try to make small structural changes and introduce sections and blocks only where necessary.\n\nIn the abstract, a document follows the structure below.\n\nA page as authored in a Word or Google Doc document uses the well-understood semantic model of headings, body text, lists, images, links, etc. that is shared between HTML, markdown, and Google Doc / Word. We call this default content. 
In an ideal situation, one would leave as much of the content as possible authored as default content, since this is the natural way for authors to treat documents.\n\nIn addition to default content, we have a concept of page sections, separated by horizontal rules or --- to group certain elements of a page together. There may be both semantic and design reasons to group content together. A simple case could be that a section of a page has a different background color.\n\nIn addition to that, there is the concept of blocks, which are authored as a table with a heading as the first row that identifies the type of block. This concept is the easiest approach to componentizing your code.\n\nSections can contain multiple blocks. Blocks should never be nested, as nesting makes things very hard for authors to use.\n\nDOM vs. Markup\n\nAEM produces clean and easily readable semantic markup from the content that’s provided to it. You can easily access it using the view source feature and have a look at the markup of the page you are currently reading.\n\nThe JavaScript library used in scripts.js takes the markup and enhances it into a DOM that is then used for most development tasks, specifically to build blocks. To create a DOM that’s easy to work with for custom project code, it is best to view it as a two-step process.\n\n\nIn the first step, we create the markup with sections, blocks, and default content that will look similar to this.\n\n\n\nIn the second step, the above markup is augmented into the following example DOM, which can then be used for styling and adding functionality. 
The differences between the markup that’s delivered from the server and the augmented DOM that is used for most of the development tasks are highlighted below.\n\nThe augmentation primarily consists of introducing a wrapper <div> for blocks and default content and dynamically adding additional helpful CSS classes and data attributes that are used by the AEM block loader.\n\nSections\n\nSections are a way for the author to group default content and blocks. Most of the time, section breaks are introduced based on visual differences between sections, such as a different background color for a part of a page.\n\nFrom a development perspective, there is usually not much interaction with sections beyond CSS styling.\nSections can contain a special block called Section Metadata, which results in data attributes being added to the section. The names of the data attributes can be chosen by the authors, and the only well-known section metadata property is Style, which will be turned into additional CSS classes added to the containing section element.\n\nBlocks and default content are always wrapped in a section, even if the author doesn’t specifically introduce section breaks.\n\nDefault Content\n\nThere is a broad range of semantics that are shared between Word documents, Google Docs, markdown, and HTML. For example, there are headings of different levels (e.g. <h1> - <h6>), images, links, lists (<ul>, <ol>), emphasis (<em>, <strong>), etc.\n\nWe take advantage of the intuitive understanding that authors have of how to use these semantics in the tools they are familiar with (e.g. Word / Google Docs), map those to markdown, and then render them in the HTML markup.\n\nAll mappings should be relatively straightforward and intuitive for the developer. One area where we go a little bit further than the simplest possible translation is in handling images. 
Instead of a simple <img> tag, a full <picture> tag is rendered with a number of different resolutions needed for display on desktop and mobile devices, as well as different formats for modern browsers that support webp and older browsers which do not.\n\nBlocks\n\nMost of the project-specific CSS and JavaScript lives in blocks. Authors create blocks in their documents and developers write the corresponding code that styles the blocks with CSS and/or decorates the DOM to take the markup of a block and transform it into the structure that’s needed or convenient for the desired styling and functionality.\n\nThe block name is used as both the folder name of a block as well as the filename for the CSS and JavaScript files that are loaded by the block loader when a block is used on a page. The block name is also used as the CSS class name on the block to allow for intuitive styling.\n\nThe JavaScript is loaded as a Module (ESM) and exports a default function that is executed as part of the block loading.\n\nAll block-level CSS should be scoped to the block to make sure that there are no side effects for other parts of your project, which means that all selectors in a block should be prefixed with the corresponding block class. In certain cases it makes sense to use the block’s wrapper or containing section for the selector as well.\n\nThere is a balance between DOM manipulation in JavaScript and the complexity of the CSS selectors. Complex, brittle CSS selectors are not recommended; at the same time, adding classes to every element makes your code more complex and disregards the semantics of elements.\n\nOne of the most important tenets of a project is to keep things simple and intuitive for authors. Complicated blocks make it hard to author content, so it is important that developers absorb the complexity of translating an intuitive authoring experience into the DOM that is needed for layout or application logic. It is often tempting to delegate complexity to the author. 
Instead, developers should make sure that blocks do not become unwieldy for authors to create. An author should always be able to simply copy/paste a block and intuitively understand what it is about.\n\nA simple example is the Columns Block. It adds additional classes in JavaScript based on how many columns are in the respective instance created by the author. This allows flexible styling of content that is in two columns vs. three columns.\n\nBlocks can be very simple or contain full application components or widgets, and provide a way for the developer to componentize their codebase into small chunks of code that can be managed easily and loaded onto web pages as needed.\n\nA block’s content is rendered into the markup as nested <div> tags for the rows and columns that the author entered. In the simplest case, a block has only a single cell.\n\n<div class=\"blockname\">\n  <div>\n     <div>\n      <p>Hello, World.</p>\n     </div>\n  </div>\n</div>\n\n\nAuthors can add blocks to their pages in an ad-hoc manner by simply adding a table with the block name in the first row or table heading. Some blocks are also loaded automatically. The header and footer blocks that need to be present on every page of a site are a good example of that.\n\nBlock Options\n\nIf you need a block to look or behave slightly differently based on certain circumstances, but not differently enough to become a new block in itself, you can let authors add block options to blocks in parentheses. These options add modifier classes to the block. For example, Columns (wide) in a table header will generate the following markup.\n\n<div class=\"columns wide\">\n\nBlock options can also contain multiple words. 
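The mapping from a block's table heading to CSS classes can be sketched in a few lines. This is a hypothetical blockClasses helper for illustration; the boilerplate's actual decoration code may differ:

```javascript
// Sketch: derive CSS classes from a block heading such as "Columns (dark, wide)".
// blockClasses is a hypothetical helper, shown to illustrate the naming convention.
function blockClasses(heading) {
  const match = heading.match(/^([^(]+)(?:\(([^)]*)\))?$/);
  if (!match) return [];
  // The block name itself becomes the first class, lowercased and hyphenated.
  const name = match[1].trim().toLowerCase().replace(/\s+/g, '-');
  // Comma-separated options become separate classes; multi-word options are hyphenated.
  const options = (match[2] || '')
    .split(',')
    .map((opt) => opt.trim().toLowerCase().replace(/\s+/g, '-'))
    .filter(Boolean);
  return [name, ...options];
}
```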
For example, Columns (super wide) will be concatenated using hyphens.\n\n<div class=\"columns super-wide\">\n\nIf block options are comma-separated, such as Columns (dark, wide), they will be added as separate classes.\n\n<div class=\"columns dark wide\">\n\nAuto Blocking\n\nIn an ideal scenario, the majority of content is authored outside of blocks, as introducing tables into a document makes it harder to read and edit. Conversely, blocks provide a great mechanism for developers to keep their code organized.\n\nA frequently-used mechanism to get the best of both worlds is called auto blocking. Auto blocking turns default content and metadata into blocks without the author having to physically create them. Auto blocking happens very early in the page decoration process, before blocks are loaded, and programmatically creates the DOM structure of a block as it would come as markup from the server.\n\nAuto blocking is often used in combination with metadata, particularly the template property. If pages have a common template, meaning that they share a certain page design or functionality, that’s usually a good opportunity for auto blocking.\n\nA good example is the article header of a blog post. It might contain information about the author, the title of the blog post, a hero image, as well as the publication date. Instead of having the author put together a block that contains all that information, an auto block (e.g. an article-header block) would be programmatically added to the page based on the <h1>, the first image, the blog author, and publication date metadata.\n\nThis allows the content author to keep the information in its natural place, the document structure outside of a block. At the same time, the developer can keep all the layout and styling information in a block.\n\nAnother very common use case is to wrap blocks around links in a document. 
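Such link wrapping can be sketched as follows. This is a DOM-free sketch: embedBlockFor and its host list are hypothetical, and a real implementation would operate on the DOM rather than bare URLs:

```javascript
// Sketch: decide whether a plain link an author pasted should become an
// embed auto block. The host list is a hypothetical example.
const EMBED_HOSTS = ['www.youtube.com', 'youtu.be', 'vimeo.com'];

function embedBlockFor(href) {
  try {
    const { hostname } = new URL(href);
    // Known video hosts get wrapped in an 'embed' block; everything else is left alone.
    return EMBED_HOSTS.includes(hostname) ? 'embed' : null;
  } catch (e) {
    return null; // relative or malformed links are not auto-blocked
  }
}
```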
A good example is an author linking to a YouTube video by simply including a link, while the developer would like to keep all the code for the video inline embed in an embed block.\n\nThis mechanism can also be used as a flexible way to include both external applications and internal references to video, content fragments, modals, forms, and other application elements.\n\nThe code for your project’s auto blocking lives in buildAutoBlocks() in your scripts.js.\n\nPlease see the following examples of auto blocking.\n\nAdobe Blog\nAEM Boilerplate\n\nPrevious\n\nKeeping it 100\n\nUp Next\n\nSpreadsheets","lastModified":"1725864574","labs":""},{"path":"/developer/sitemap","title":"Sitemaps","image":"/developer/media_191790b100d361466b2b0a3dc149a79ecc6511102.jpg?width=1200&format=pjpg&optimize=medium","description":"Create automatically generated sitemap files to be referenced from your robots.txt. This helps with SEO and the discovery of new content. AEM can generate three ...","content":"style\ncontent\n\nSitemaps\n\nCreate automatically generated sitemap files to be referenced from your robots.txt. This helps with SEO and the discovery of new content. AEM can generate three types of sitemaps: without any configuration, based solely on a query index, or based on a manual sitemap configuration. A single sitemap must be limited to 50,000 URLs and 50MB (uncompressed) in size - see Limits.\n\nCreating a Sitemap without any configuration\n\nIf you don’t do anything, you will see your sitemap in sitemap.xml and have a sitemap index in sitemap.json. 
It will contain a list of all your published documents.\n\nIf you started with another type of sitemap and would like to switch to this type, you’ll have to delete the helix-sitemap.yaml configuration file - either manually defined in GitHub or automatically generated - and reindex your site.\n\nDomain name used in external URLs\n\nTo customize the domain used in creating external URLs, add a property named cdn.prod.host in your project configuration.\n\nIf you are using the configuration service, see here how to update the site configuration.\nOtherwise, see here for document-based configuration.\nGenerating a Sitemap configuration based on an index\n\nPlease see the document Indexing to learn more about indexing. In order to generate a sitemap configuration based on an index, please ensure that you have already set up an initial query index as explained there. This will generate a sitemap at the location:\n\nhttps://<branch>--<repo>--<owner>.hlx.page/sitemap.xml\n\nAnd a sitemap configuration at the following location:\n\nhttps://<branch>--<repo>--<owner>.hlx.page/helix-sitemap.yaml\n\n\nIt is recommended that you create a sitemap-index.xml file that references all your sitemaps and keep that as part of your project code in your github repo. This way it is easy to add new sitemaps as the project expands.\n\nManual setup of your Sitemap configuration\n\nIf you need more customization than your generated sitemap configuration file provides, you can copy its contents and paste it into a file named helix-sitemap.yaml in the root folder of your project.\n\nAlternatively, if you are using the configuration service, you can also update the sitemap.yaml via the site configuration.\n\nNote: When using a manually configured index and sitemap (e.g. your code repo includes a helix-query.yaml and helix-sitemap.yaml file) your index definition must include the robots property to ensure the sitemap excludes pages with robots: noindex metadata. 
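Conceptually, the exclusion works like this. A simplified sketch of the filtering (sitemapLocations is a hypothetical helper, not the sitemap service's code):

```javascript
// Sketch: build sitemap locations from query index rows, skipping pages
// whose robots metadata contains noindex. Hypothetical helper for illustration.
function sitemapLocations(indexRows, host) {
  return indexRows
    .filter((row) => !String(row.robots || '').includes('noindex'))
    .map((row) => `${host}${row.path}`);
}
```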
When using auto-generated index definitions, simply follow the recommendations in the indexing documentation so those pages are excluded from the index.\n\nThe following sections contain the supported types of sitemaps.\n\nSimple Sitemap\n\nThe following is a simple helix-sitemap.yaml. It assumes a single index containing all the pages that need to appear in the sitemap.\n\n sitemaps:\n   example:\n     source: /query-index.json\n     destination: /sitemap-en.xml\n\n\nIf you want last modification dates to be included in the URLs of your sitemap, add a lastmod property including a format to your configuration.\n\n sitemaps:\n   example:\n     source: /query-index.json\n     destination: /sitemap-en.xml\n     lastmod: YYYY-MM-DD\n\nMultiple Sitemaps\n\nIt is common to have sitemaps per section of the site and/or per country or language. AEM supports sitemaps including the corresponding hreflang references. In the following example we assume that there is a one-to-one mapping between the indexes and the sitemap XML files.\n\n sitemaps:\n   example:\n     lastmod: YYYY-MM-DD\n     languages:\n       en:\n         source: /en/query-index.json\n         destination: /sitemap-en.xml\n         hreflang: en\n       fr:\n         source: /fr/query-index.json\n         destination: /sitemap-fr.xml\n         hreflang: fr\n         alternate: /fr/{path}\n\n\nIf there are two pages in the English and French sections that share a common suffix, they will be related, so e.g. 
if you have a page /welcome in the English section and a page /fr/welcome in the French section, the resulting entry in the /sitemap-en.xml will look like this:\n\n<url>\n  <loc>https://www.mysite.com/welcome</loc>\n  <xhtml:link rel=\"alternate\" hreflang=\"en\" href=\"https://www.mysite.com/welcome\"/>\n  <xhtml:link rel=\"alternate\" hreflang=\"fr\" href=\"https://www.mysite.com/fr/welcome\"/>\n</url>\n\n\nA similar entry will be available in /sitemap-fr.xml.\n\nSpecifying the primary language manually\n\nThere might be situations where you have alternate versions of a page, but you’re unable to use a common suffix to identify them, possibly because you’re porting a legacy website that should not have its paths changed. In that situation, you can specify a primary-language-url for the alternate location, in the metadata of the document.\n\nLet’s assume our primary language is English, we have a page /welcome in the English section and /fr/bienvenu in the French section, and the latter is an alternate version of the former.\n\nFirst, we add that information to the document at /fr/bienvenu in its metadata:\n\n\nThis can also be added to a global metadata sheet, as shown in Bulk Metadata.\n\nThen, we add an indexed property primary-language-url to the French index:\n\n primary-language-url:\n   select: head > meta[name=\"primary-language-url\"]\n   value: attribute(el, \"content\")\n\n\nFinally, we re-publish the French page, and rebuild the sitemap.\n\nSpecifying the default language\n\nAnother common requirement is to specify the default language for a sitemap with multiple languages. 
This can be achieved by adding a property default in the sitemap:\n\n sitemaps:\n   example:\n     default: en\n     lastmod: YYYY-MM-DD\n     languages:\n       en:\n         source: /en/query-index.json\n         destination: /sitemap-en.xml\n         hreflang: en\n       fr:\n         source: /fr/query-index.json\n         destination: /sitemap-fr.xml\n         hreflang: fr\n         alternate: /fr/{path}\n\n\nIn the resulting sitemap, all entries from the english subtree will have an extra alternate entry with hreflang x-default.\n\nSpecifying multiple hreflangs for one subtree\n\nSometimes, it is required to map multiple hreflangs to only one language subtree, e.g. consider we want the following to appear in the resulting sitemap:\n\n<url>\n <loc>https://myhost/la/page</loc>\n <xhtml:link rel=\"alternate\" hreflang=\"es-VE\" href=\"https://myhost/la/page\"/>\n <xhtml:link rel=\"alternate\" hreflang=\"es-SV\" href=\"https://myhost/la/page\"/>\n <xhtml:link rel=\"alternate\" hreflang=\"es-PA\" href=\"https://myhost/la/page\"/>\n</url>\n\n\nEvery page in our sitemap source should appear exactly once, but have multiple alternate hreflangs associated with it. 
In order to achieve this, you should specify an array of languages in the hreflang property:\n\n sitemaps:\n   example:\n     lastmod: YYYY-MM-DD\n     languages:\n       la:\n         source: /la/query-index.json\n         destination: /sitemap-la.xml\n         hreflang:\n           - es-VE\n           - es-SV\n           - es-PA\n\nMultiple Indexes Aggregated Into One Sitemap\n\nThere are cases where it is easier to have a single larger sitemap than fragmented small sitemaps, especially as there is a limit of sitemaps that can be submitted to search engines per site.\n\nThe following example shows how to aggregate a number of separate indexes into a single sitemap.\n\n sitemaps:\n   example:\n     lastmod: YYYY-MM-DD\n     languages:\n       dk:\n         source: /dk/query-index.json\n         destination: /sitemap.xml\n         hreflang: dk\n         alternate: /dk/{path}\n       no:\n         source: /no/query-index.json\n         destination: /sitemap.xml\n         hreflang: no\n         alternate: /no/{path}\n\n\nUsing the same destination it is possible to combine multiple small sitemaps into one larger sitemap.\n\nIncluding other sitemaps as input\n\nIn a mixed scenario, where not all languages in a sitemap are managed in AEM, you can include sitemaps from other language trees by specifying an XML path as source, as in:\n\nsitemaps:\n   example:\n     lastmod: YYYY-MM-DD\n     languages:\n       en:\n         source: /en/query-index.json\n         destination: /sitemaps/sitemap-en.xml\n         hreflang: en\n       fr:\n         source: https://www.mysite.com/legacy/sitemap-fr.xml\n         destination: /sitemaps/sitemap-fr.xml\n         hreflang: fr\n         alternate: /fr/{path}\n\n\nIn this example, we use an external french sitemap to calculate all sitemap locations. 
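An alternate template such as /fr/{path} relates paths across language trees. Conceptually, it works like this (a sketch with hypothetical helpers, not the service's actual implementation):

```javascript
// Sketch: pair pages across language trees using an alternate template
// like '/fr/{path}'. Both helpers are hypothetical illustrations.
function toAlternate(template, path) {
  // '/welcome' with '/fr/{path}' -> '/fr/welcome'
  return template.replace('{path}', path.replace(/^\//, ''));
}

function fromAlternate(template, alternatePath) {
  // '/fr/welcome' with '/fr/{path}' -> '/welcome'; null if it doesn't match
  const prefix = template.split('{path}')[0];
  return alternatePath.startsWith(prefix)
    ? `/${alternatePath.slice(prefix.length)}`
    : null;
}
```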
AEM will determine alternates for English sitemap URLs by deconstructing the French counterparts in the external sitemap using the alternate definition.\n\nAdding an extension to all locations in the sitemap\n\nIn a scenario where you want all your locations to have an extension, e.g. .html, and you’re unable to generate a helix-sitemap sheet in your query index to derive a formula, you can add an extension to all languages or an individual language using an extension property:\n\nsitemaps:\n   example:\n     lastmod: YYYY-MM-DD\n     extension: .html\n     languages:\n       en:\n         source: /en/query-index.json\n         destination: /en/sitemap.xml\n         hreflang: en\n       fr:\n         source: /fr/query-index.json\n         destination: /fr/sitemap.xml\n         hreflang: fr\n         alternate: /fr/{path}\n\n\nPrevious\n\nPush Invalidation\n\nUp Next\n\nLaunch","lastModified":"1755523217","labs":""},{"path":"/developer/spreadsheets","title":"Spreadsheets and JSON","image":"/developer/media_10a516dc1e3a4c9b42aacb149e1bf202ea3e93b8c.jpeg?width=1200&format=pjpg&optimize=medium","description":"In addition to translating Google Docs and Word documents into markdown and HTML markup, AEM also translates spreadsheets (Microsoft Excel workbooks and Google Sheets) into ...","content":"style\ncontent\n\nSpreadsheets and JSON\n\nIn addition to translating Google Docs and Word documents into markdown and HTML markup, AEM also translates spreadsheets (Microsoft Excel workbooks and Google Sheets) into JSON files that can easily be consumed by your website or web application.\n\nThis enables many uses for content that is table-oriented or structured.\n\nSheets and Sheet structure\n\nThe simplest example of a sheet consists of a table that uses the first row as column names and the subsequent rows as data. 
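Conceptually, the translation from such a table into the JSON payload can be sketched as follows (a simplified sketch of the payload shape, not AEM's actual implementation):

```javascript
// Sketch: convert a sheet (first row = column names, remaining rows = data)
// into the single-sheet JSON shape. sheetToJson is a hypothetical helper.
function sheetToJson(rows, { offset = 0, limit = 1000 } = {}) {
  const [columns, ...values] = rows;
  const data = values
    .slice(offset, offset + limit)
    .map((row) => Object.fromEntries(columns.map((col, i) => [col, row[i] ?? ''])));
  return {
    total: values.length, // all data rows, regardless of paging
    offset,
    limit: data.length, // rows actually returned
    columns,
    data,
    ':type': 'sheet',
  };
}
```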
An example might look something like this.\n\nAfter a preview and publish via the sidekick, AEM translates this table to a JSON representation which is served to requests to the corresponding .json resource. The above example gets translated to:\n\n{\n  \"total\": 4,\n  \"offset\": 0,\n  \"limit\": 4,\n  \"columns\": [\"Source\", \"Destination\"],\n  \"data\": [\n    {\n      \"Source\": \"/sidekick-extension\",\n      \"Destination\": \"https://chromewebstore.google.com/detail/aem-sidekick/igkmdomcgoebiipaifhmpfjhbjccggml\"\n    },\n    {\n      \"Source\": \"/github-bot\",\n      \"Destination\": \"https://github.com/apps/helix-bot\"\n    },\n    {\n      \"Source\": \"/install-github-bot\",\n      \"Destination\": \"https://github.com/apps/helix-bot/installations/new\"\n    },\n    {\n      \"Source\": \"/tutorial\",\n      \"Destination\": \"/developer/tutorial\"\n    }\n  ],\n  \":type\": \"sheet\"\n}\n\n\nAEM allows you to manage workbooks with multiple sheets.\n\nIf there is only one sheet, AEM will by default use that sheet as the source of the information.\nIf there are multiple sheets, AEM will deliver sheets that have names prefixed with shared-. This lets you keep sheets in the same workbook that will not be exposed.\nIf there is a sheet named shared-default, AEM will deliver it as a single sheet, unless there is a query parameter pointing to a different sheet.\nIf there are multiple sheets with shared- prefix, AEM will deliver them in multi-sheet format. See below for an example.\n\nNote: the helix- prefix is deprecated and the more neutral shared- prefix should be used.\n\nSee the query parameter section for details on how to query a specific sheet.\n\nMulti-Sheet Format\n\nIf there are multiple sheets with shared- prefix, AEM will deliver them in multi-sheet format. 
Here's an example of a payload with 2 sheets:\n\n{\n  \":names\": [\n    \"first\",\n    \"second\"\n  ],\n  \":type\": \"multi-sheet\",\n  \":version\": 3,\n  \"first\": {\n    \"total\": 0,\n    \"offset\": 0,\n    \"limit\": 0,\n    \"data\": [],\n    \"columns\": []\n  },\n  \"second\": {\n    \"total\": 0,\n    \"offset\": 0,\n    \"limit\": 0,\n    \"data\": [],\n    \"columns\": []\n  }\n}\n\nThe :names property contains an array with all the names of the contained sheets.\nFor each sheet name in :names there is a property in the payload. The value is the corresponding sheet data in single sheet format.\nQuery Parameters\nOffset and Limit\n\nSpreadsheets and JSON files can get very large. In such cases, AEM supports the use of limit and offset query parameters to indicate which rows of the spreadsheet are delivered. In case of a multi-sheet, offset and limit are applied to all sheets in the payload.\n\nAs AEM always compresses the JSON, payloads are generally relatively small. Therefore by default AEM limits the number of rows it returns to 1000 if the limit query parameter is not specified. This is sufficient for many simple cases.\n\nSheet\n\nThe sheet query parameter allows an application to specify one or multiple specific sheets in the spreadsheet or workbook. 
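A client can combine these query parameters when requesting a published .json resource. A sketch (sheetUrl is a hypothetical helper):

```javascript
// Sketch: build a request URL with sheet, offset, and limit query parameters.
// sheetUrl is a hypothetical helper for illustration.
function sheetUrl(base, { sheets = [], offset, limit } = {}) {
  const url = new URL(base);
  sheets.forEach((name) => url.searchParams.append('sheet', name));
  if (offset !== undefined) url.searchParams.set('offset', String(offset));
  if (limit !== undefined) url.searchParams.set('limit', String(limit));
  return url.toString();
}
```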
As an example, ?sheet=jobs will return the sheet named shared-jobs, and ?sheet=jobs&sheet=articles will return the data for the sheets named shared-jobs and shared-articles.\n\nSpecial Sheet Names\n\nIn certain use cases, AEM also writes to spreadsheets, where it expects specific sheet names:\n\nThe indexing service only writes to a sheet named raw_index, which may be delivered via JSON in a single sheet setup.\nArrays\n\nNative arrays are not supported as cell values, so they are delivered as strings.\n\n\"tags\": \"[\\\"Adobe Life\\\",\\\"Responsibility\\\",\\\"Diversity & Inclusion\\\"]\"\n\nYou can turn them back into arrays in JavaScript using JSON.parse().\n\nPrevious\n\nMarkup - Sections\n\nUp Next\n\nPublish","lastModified":"1743417022","labs":""},{"path":"/docs/setup-customer-sharepoint","title":"How to use Sharepoint (application)","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"If you use SharePoint as your content source, AEM uses a registered Microsoft Azure application to access your content. This application has delegated permissions defined ...","content":"style\ncontent\n\nHow to use Sharepoint (application)\n\nNOTE: for projects using Adobe’s SharePoint, please continue here.\n\nIf you use SharePoint as your content source, AEM uses a registered Microsoft Azure application to access your content. This application has delegated permissions defined that allow the service to access SharePoint on behalf of a user. This user needs to be registered to the project that is using SharePoint.\n\nAlternatively, the services can also authenticate as an application and use application permissions to access the sites. This needs additional setup by a SharePoint site administrator who can grant the permissions for the application.\n\nThe preferred setup is to use application permissions, as this narrows down the access the service has to a specific SharePoint site and does not require sharing any secrets of a technical user. 
Also, it reduces the problems around password rotation.\n\nThe following describes how to set up application permissions for your project. If you want to set up a technical user, please continue here.\n\nSetting up SharePoint involves the following steps:\n\nCreate or identify a SharePoint site that will serve as the site for document-based authoring\nCreate a folder within SharePoint that will be the website root.\nSet the content source in your site configuration\nAccess the Registration Portal\nRegister the Application\nApply the Sites.Selected permission to the SharePoint site\n1. Create or identify a SharePoint site\n\nTalk to your IT department to either identify or create a SharePoint site that will be used for document-based authoring. One site can “host” multiple websites (projects). This site will later receive the respective permissions so that the publishing services can access it.\n\n2. Create the website root folder\n\nNavigate to your desired location in the SharePoint site created or identified above and create a root folder that will be your website root. It is best not to use a SharePoint list root directly, so that you have a shared space for your authors to put collateral documents, for example a drafts folder, or how-to-author documentation.\n\nAn example file structure might look like this, using the website folder as the root:\n\n3. Set the content source in your site configuration\n\nThe next step is to set the source property in your site configuration to the corresponding SharePoint URL\n\n(see: https://www.aem.live/docs/admin.html#schema/SiteConfig for more detail)\n\nThe expected format usually looks like the example below.\n\nhttps://<tenant>.sharepoint.com/sites/<sp-site>/Shared%20Documents/website\n\n\n\nThis might vary depending on how you create the SharePoint site and lists. 
In order to obtain the URL, the simplest way is to copy-paste the first part from the browser address bar, e.g.:\n\n\n\nAnd then add the rest manually (note that copying the share link via the UI adds unnecessary information; it is better to use a canonical representation of the URL). Once you have composed the URL, you can test it by entering it again in the browser. You should end up in the folder view of your website root.\n\nAfter that, update the source in your site config service via the API accordingly.\n\nAn easy way to make this change is to use https://labs.aem.live/tools/site-admin/index.html to either create or update a site configuration.\n\nFor example:\n\n \n\nHit Save to persist the change.\n\n4. Access the Registration Portal\nOverview\n\nIn order for the AEM service to access the authored content, it needs some information and setup. The AEM service (a cloud function) accesses the MS Graph API on behalf of an application (or configured user). In order to do so, it first needs to authenticate in the context of an Application. This is important because the scopes given to the application define what permissions the service has on the MS Graph API. For example, it should be allowed to read and write documents, but not to alter access control.\n\nAn application is represented as an “Enterprise Application” in the respective Active Directory of a tenant. The permissions given to that enterprise application ultimately define what the service can access in that tenant’s resources. Certain permissions need to be approved by an Active Directory administrator before a user can use the application. This so-called “admin consent” is a mechanism to verify and control which permissions apps can have. This prevents dubious apps from tricking users into trusting an app that is not official. Having the extra admin consent step allows IT security to control which apps the employees can use.\n\n1. 
Sign in to the Registration Portal\nView Enterprise Applications in Azure Portal\n\nAssuming that so far no AEM Enterprise Applications are present in Azure (Microsoft Entra ID)\n\nAccess The Registration Portal\n\nGo to https://admin.hlx.page/register and enter the GitHub URL of the project or the org/site values of your site config.\n\nSign in as a non-admin user\n\nSigning in as a user that does not have admin permissions will show an error that the application needs approval, i.e. it needs admin consent.\n\nProblem: the Enterprise Application is not registered if a user never logs in.\n\nSign in as an admin user\n\nOne solution is to sign in as a user that does have admin permissions:\n\n(note: at this point, the Enterprise Application is still not registered in Azure)\n\nAEM Content Integration Registration visible in UI\n\nIf the admin logs in (without checking the checkbox and granting consent for everyone), the Enterprise Application is present.\n\nCreate the application using MS Graph or PowerShell\n\nAlternatively, you can create the Enterprise Application via MS Graph or PowerShell.\n\nIn order to make it visible in the Azure UI, you also need to add the WindowsAzureActiveDirectoryIntegratedApp tag. 
This can be done directly when creating the application.\n\nUsing Graph Explorer:\n\nPOST https://graph.microsoft.com/v1.0/servicePrincipals\nContent-type: application/json\n{\n    \"appId\": \"e34c45c4-0919-43e1-9436-448ad8e81552\",\n    \"tags\": [\n        \"WindowsAzureActiveDirectoryIntegratedApp\"\n    ]\n}\n\n\nUsing PowerShell:\n\nPS> Connect-MgGraph -Scopes \"Application.ReadWrite.All\"\nPS> New-MgServicePrincipal -AppId e34c45c4-0919-43e1-9436-448ad8e81552 -Tags WindowsAzureActiveDirectoryIntegratedApp\n\n\nAfter that, you still need to give admin consent if you want a non-admin user to log in.\n\n\nAlso see:\n\nhttps://learn.microsoft.com/en-us/entra/identity/enterprise-apps/create-service-principal-cross-tenant\nhttps://learn.microsoft.com/en-us/entra/identity/enterprise-apps/add-application-portal-configure?pivots=ms-graph\n\nReview permissions\n\nNote that the AEM Content Integration Registration (e34c45c4-0919-43e1-9436-448ad8e81552) application is only needed during registration to verify that the user has read access to the SharePoint site. It has the following delegated permissions:\n\nopenid\nAllows users to sign in to the app with their work or school accounts and allows the app to see basic user profile information.\nprofile\nAllows the app to see your users' basic profile (e.g., name, picture, user name, email address).\nFiles.ReadWrite.All\nAllows the app to read, create, update, and delete all files the signed-in user can access.\nUser logged in to the Registration portal\n\nAfter completing this initial step, the user is logged in to the registration portal.\n\nVerify write access to the content source via challenge file\nDownload the challenge file\n\nBefore you can create (or change) the registration, you have to prove that you have write access to the respective SharePoint location. To do so, you need to upload a text file containing the mentioned content. 
This can easily be done by downloading the file and dropping it into the SharePoint folder.\n\nAfter that, click Validate to continue the registration.\n\nAdding the AEM Content Integration App with application permissions\nAdd Enterprise Application\n\nWhen logged in to the registration portal, the content source used by the project needs to be connected to an OAuth grant for the AEM Content Integration application. This is needed so that the system can access the documents in SharePoint, convert them to an internal format (Markdown), and store them in Adobe’s storage (S3/R2) for fast delivery.\n\nUsing application Sites.Selected permissions is more secure, as it limits the scope to a single SharePoint site. To connect, click the Connect Application button.\n\nIf you have never registered an application or a user before, you will probably see one of the following errors:\n\nUnable to validate access: General exception while processing\n\n\nor\n\nUnable to validate access: Either scp or roles claim\nneed to be present in the token.\n\n\nAs above, this means the Enterprise Application for the AEM Content Integration (83ab2922-5f11-4e4d-96f3-d1e0ff152856) is not present in Azure yet.\n\nTo add it, use Graph Explorer or PowerShell:\n\n\nUsing Graph Explorer:\n\nPOST https://graph.microsoft.com/v1.0/servicePrincipals\nContent-type: application/json\n{\n    \"appId\": \"83ab2922-5f11-4e4d-96f3-d1e0ff152856\",\n    \"tags\": [\n        \"WindowsAzureActiveDirectoryIntegratedApp\"\n    ]\n}\n\n\nUsing PowerShell:\n\nPS> Connect-MgGraph -Scopes \"Application.ReadWrite.All\"\nPS> New-MgServicePrincipal -AppId 83ab2922-5f11-4e4d-96f3-d1e0ff152856 -Tags WindowsAzureActiveDirectoryIntegratedApp\n\n\n\n\nAlso see:\n\nhttps://learn.microsoft.com/en-us/entra/identity/enterprise-apps/create-service-principal-cross-tenant\nAdd Application Roles\n\nNow the Enterprise Application AEM Content Integration is visible in Azure. 
But it doesn’t have any Sites.Selected application permissions.\n\nProblem: Using the admin consent UI would grant all application and delegated permissions, which we don’t want.\n\nAn easy way is to consent to all permissions and then remove the delegated ones again.\n\nAdd Application Roles using PowerShell or Graph Explorer\n\nAlternatively, adding the app roles can be done with the following steps:\n\nFind the service principal of the Enterprise Application (principalId). This is the one you created above.\nFind the service principal of the Microsoft Graph API (resourceId).\nFind the id of the Sites.Selected application role (appRoleId).\nAssign the application role to the managed identity (the Enterprise Application).\n\nUsing PowerShell, this can be done with:\n\n$ObjectId = \"abcdef-1234-49b6-b660-cc85b34fe516\"    # <-- replace with the object id of your Enterprise Application\n$AAD_SP = Get-AzureADServicePrincipal -SearchString \"Microsoft Graph\";\n$AAD_SP\n \n$MSI = Get-AzureADServicePrincipal -ObjectId $ObjectId\nif($MSI.Count -gt 1)\n  { \n  Write-Output \"More than 1 principal found, please find your principal and copy the right object ID. 
Now use the syntax $MSI = Get-AzureADServicePrincipal -ObjectId <your_object_id>\"\n  Exit\n  }\n \n$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq \"Sites.Selected\"}\nNew-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId  -PrincipalId $MSI.ObjectId  -ResourceId $AAD_SP.ObjectId[0]  -Id $AAD_AppRole.Id\n\n\n\nUsing Graph Explorer, this involves more steps:\n\nFind the service principal of the Enterprise Application for AEM Content Integration:\n  GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=appId eq '83ab2922-5f11-4e4d-96f3-d1e0ff152856' \n...\n  \"value\": [\n        {\n            \"id\": \"6761ada0-733b-4a02-98b2-3db970834fe0\",\n...\n\n\nThis will be our principalId.\n\nFind the service principal of the Microsoft Graph API:\nGET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=appId eq '00000003-0000-0000-c000-000000000000'\n...\n  \"value\": [\n        {\n            \"id\": \"5159db96-7193-414e-9730-b1d1e4448443\",\n...\n\n\nThis is the resourceId (the resource that defines the application role).\n\nFind the id of the application role:\n\nGET https://graph.microsoft.com/v1.0/servicePrincipals/${resourceId}/appRoles\n\n\nSubstitute the resourceId with the service principal of the Microsoft Graph API as obtained in the previous step.\n\nGET https://graph.microsoft.com/v1.0/servicePrincipals/5159db96-7193-414e-9730-b1d1e4448443/appRoles\n...\n        {\n            \"allowedMemberTypes\": [\n                \"Application\"\n            ],\n            \"description\": \"Allow the application to access a subset of site collections without a signed in user.  
The specific site collections and the permissions granted will be configured in SharePoint Online.\",\n            \"displayName\": \"Access selected site collections\",\n            \"id\": \"883ea226-0bf2-4a8f-9f9d-92c9162a727d\",\n            \"isEnabled\": true,\n            \"origin\": \"Application\",\n            \"value\": \"Sites.Selected\"\n        },\n...\n\n\nFind the entry for Sites.Selected. Its id is the appRoleId.\n\nAssign the application role to the managed identity. The request has the format:\nPOST https://graph.microsoft.com/v1.0/servicePrincipals/${principalId}/appRoleAssignedTo\nContent-Type: application/json\n\n{\n  \"principalId\": \"${principalId}\",\n  \"resourceId\": \"${resourceId}\",\n  \"appRoleId\": \"${appRoleId}\"\n}\n\n\nFor example:\n\nPOST https://graph.microsoft.com/v1.0/servicePrincipals/6761ada0-733b-4a02-98b2-3db970834fe0/appRoleAssignedTo\nContent-type: application/json\n{\n    \"principalId\": \"6761ada0-733b-4a02-98b2-3db970834fe0\",\n    \"resourceId\": \"5159db96-7193-414e-9730-b1d1e4448443\",\n    \"appRoleId\": \"883ea226-0bf2-4a8f-9f9d-92c9162a727d\"\n}\n\nValidate Permissions\n\nEventually you should see the granted application permission in the UI.\n\nBack in the registration portal, the error message should have changed to:\n\nThe content source can't be found or you don't have permission to access it. 
Please check that the URL is correct, the app \"AEM Content Integration (83ab2922-5f11-4e4d-96f3-d1e0ff152856)\" has the right permissions, and that the user is allowed to access it.\n\n\n\nAdd permissions to the SharePoint site\n\nTo add the permissions to the SharePoint site, we need to find its SiteId.\n\nThis can again be done using Graph Explorer:\n\nGET https://graph.microsoft.com/v1.0/sites/{host-name}:/{server-relative-path}\n\n\n\nExample:\n\nGET https://graph.microsoft.com/v1.0/sites/adobeenterprisesupportaem.sharepoint.com:/sites/hlx-test-project\n\n{\n...\n    \"id\": \"adobeenterprisesupportaem.sharepoint.com,03cc3587-0e4d-405e-b06c-ffb0a622b7ac,5fbc1df5-640c-4780-8b59-809e3193c043\",\n...\n}\n\n\n\nUsing the id, we can set the permissions:\n\nPOST https://graph.microsoft.com/v1.0/sites/adobeenterprisesupportaem.sharepoint.com,03cc3587-0e4d-405e-b06c-ffb0a622b7ac,5fbc1df5-640c-4780-8b59-809e3193c043/permissions\nContent-type: application/json\n\n{\n    \"roles\": [\n        \"write\"\n    ],\n    \"grantedToIdentities\": [\n        {\n            \"application\": {\n                \"id\": \"83ab2922-5f11-4e4d-96f3-d1e0ff152856\",\n                \"displayName\": \"AEM Content Integration\"\n            }\n        }\n    ]\n}\n\n\n\nNote: If you get an “Access Denied” error while executing the above request, you need “Site Admin” permissions to run this step. You may also need to grant additional consent for additional “Sites” scopes from Graph Explorer’s “Modify Permissions” panel.\n\nAfter that, the registration portal should show canRead: ok","lastModified":"1753706982","labs":""},{"path":"/docs/custom-headers","title":"Custom HTTP Response Headers","image":"/docs/media_10acfc1a9e5c8bbbb728e0b1e6dc193847c000b0c.jpg?width=1200&format=pjpg&optimize=medium","description":"In some cases, it is useful to apply custom HTTP response headers to resources, for example to allow CORS. 
Headers can be specified in the ...","content":"style\ncontent\n\nCustom HTTP Response Headers\n\nIn some cases, it is useful to apply custom HTTP response headers to resources, for example to allow CORS. Headers can be specified in the headers object of your site configuration like this:\n\n{\n  \"/fragments/**\": [\n    {\n      \"key\": \"access-control-allow-origin\",\n      \"value\": \"https://www.example.com\"\n    }\n  ]\n}\n\n\nTry our HTTP Headers Editor or see here for instructions on how to apply custom headers from your command line.\n\nIf your site does not use the configuration service yet, see here for legacy instructions.\n\nThe URL property is a glob pattern matching the pages the custom header should be applied to. A wildcard * can be used as a prefix or suffix, allowing for flexible matches on the URL pathname. You can use ** for deep path matching. Typical examples include /foo/** or **/bar/**.\n\nSecurity Warning\n\nAdding an access-control-allow-origin header with value * may render your protected preview and live environments vulnerable to CSRF attacks against authors logged in via AEM Sidekick, which can potentially lead to disclosure of sensitive information. If you do have a use case where it is not possible to restrict access to a single origin, try to limit the header to specific URLs to minimize the attack surface.\n\nNote that the headers are applied to both the preview and live versions of the content. For changes that apply to content, caches will be purged automatically and your changes will take effect immediately. 
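The glob semantics described above can be sketched in plain JavaScript. This is only an illustration, not the service's actual implementation, and it assumes the common convention that a single * stays within one path segment while ** crosses segment boundaries:

```javascript
// Convert a header-config glob into a RegExp (illustrative sketch only).
// Assumption: '*' matches within one path segment, '**' matches across segments.
function globToRegExp(glob) {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex specials, keep '*'
    .replace(/\*\*/g, '\u0000')           // placeholder so '**' survives the next step
    .replace(/\*/g, '[^/]*')              // single '*' stays within a segment
    .replace(/\u0000/g, '.*');            // '**' matches any depth
  return new RegExp(`^${escaped}$`);
}

globToRegExp('/fragments/**').test('/fragments/header/nav'); // true
globToRegExp('/foo/**').test('/bar/baz');                    // false
globToRegExp('**/bar/**').test('/a/bar/b');                  // true
```

Under this reading, /fragments/** would match every page below /fragments/, while a pattern ending in a single * would only match direct children.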
If the changes apply to code resources, they won’t take effect until the next code sync, either by updating the code in the main branch of the repository or by manually triggering a resync via the Admin API.\n\nPrevious\n\nBlock Party\n\nUp Next\n\nIndexing","lastModified":"1761299161","labs":""},{"path":"/docs/authentication-setup","title":"Authentication Overview","image":"/docs/media_13670a27a562dc83e8626819b5054b83a727f3bcd.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to enable authentication on an AEM site.","content":"style\ncontent\n\nAuthentication Overview\n\nDepending on your setup, the following options exist for requiring authentication for visitors to your site. A typical use case for this would be an intranet.\n\nVisitors\nSite Authentication for your visitors on aem.live\n\nThe easiest way to set up an intranet\n\nSite Authentication for your visitors when using AEM Authoring\n\nWhen you author using AEM Sites and Universal Editor, you must also enable it in your AEM environment.\n\nSite Authentication with Cloudflare Zero Trust\n\nCloudflare Zero Trust offers a powerful Identity Provider to be used with aem.live\n\nAuthors\n\nFollow the instructions below to set up authentication for authors if you are using Sidekick and the Admin API. Setting up authentication is a requirement for enforcing user roles and permissions.\n\nAuthentication for your Authors\n\nAuthentication for Authors using Sidekick and the Admin API","lastModified":"1769441983","labs":""},{"path":"/docs/configuration","title":"Document-based Project Configuration","image":"/docs/media_15797f15710852969aba8d27f25800586232b1e1d.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to configure your project","content":"style\ncontent\n\nDocument-based Project Configuration\nWe don't recommend using document-based project configuration. 
Please use the Configuration Service and the corresponding tools on https://tools.aem.live instead.\n\nThe project configuration file is located in /.helix/config.xlsx (for SharePoint) or /.helix/config (for Google Drive). It consists of a table with Key and Value columns. For example:\n\nThe format of the keys follows identifier-dot notation, as in JavaScript. You can think of the sheets as a flattened JSON structure. If a key appears more than once, it forms an array, e.g.:\n\nThis will conceptually form a structure like:\n\n{\n  \"access\": {\n    \"allow\": [\n      \"*@adobe.com\",\n      \"*@example.com\"\n    ]\n  }\n}\n\n\nThe following table lists the configuration options of a project.\n\nKey\t Comment\t Example \n name\t Name of the project used by the Slack bot when reporting.\t Franklin Website \n slack\t Slack channel for this project\t T03DFTYDQ/C12U1A8480Q \n host\t Host displayed in Slack bot info\t www.example.com \n timezone\t Timezone used by the Slack bot\t Europe/Zurich \n cdn.prod.host\t CDN host name for production environment\t www.example.com \n cdn.prod.type\t CDN type\t fastly \n cdn.prod.route\t Route or routes on the CDN that are rendered with Franklin\t /site \n cdn.prod.serviceId\t Fastly specific: service ID\t 1234 \n cdn.prod.authToken\t Fastly specific: API token\t \n cdn.prod.endpoint\t Akamai specific: Endpoint\t \n cdn.prod.clientSecret\t Akamai specific: Client secret\t \n cdn.prod.clientToken\t Akamai specific: Client token\t \n cdn.prod.accessToken\t Akamai specific: Access token\t \n cdn.prod.origin\t Cloudflare specific: origin\t \n cdn.prod.plan\t Cloudflare specific: plan\t \n cdn.prod.zoneId\t Cloudflare specific: zone id\t \n cdn.prod.apiToken\t Cloudflare specific: api token\t \n cdn.preview.host\t Custom CDN host name for preview environment\t preview.example.com \n cdn.live.host\t Custom CDN host name for live environment\t live.example.com \n access.allow\t The email glob of the users that are allowed. 
This will enable site authentication if set.\t *@adobe.com \n access.require.repository\t The list of owner/repo pointers to projects that are allowed to use this content.\t adobe/helix-website \n admin.role.author\t The email glob of the users with the author role.\t *@adobe.com \n admin.role.publish\t The email glob of the users with the publish role.\t *@adobe.com\n\nAlso see the JSON Schema and TypeScript types of this config.\n\nNote: Activate the configuration using the Sidekick for your changes to take effect.\n\nPrevious\n\nConfiguration Service","lastModified":"1758316636","labs":""},{"path":"/docs/translation-and-localization","title":"Translation and Localization","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Managing internationalization (i18n) and localization (l10n) in Adobe Experience Manager","content":"style\ncontent\n\nTranslation and Localization\nThe Problem\n\nMany websites need to support multiple languages as well as multiple countries, markets, or regions.\n\nFor decades, the dominant approach has been to operate “region” or “market” first.\n\nThis results in an information architecture that looks something like this…\n\nhttps://<domain>/ca/\nhttps://<domain>/ca_fr/\n\n\n… or …\n\nhttps://<domain>/en-ca/\nhttps://<domain>/en-us/\n\n\nUsually this leads both to duplicated content across locales that share the same language and to people in a particular region/market losing the ability to read content in the language they are most comfortable with. For example, in a country like Switzerland only German, French, and Italian are supported, which leaves an English speaker in Switzerland out of luck.\n\nFrom the outside: SEO impact and visitor impact\n\nFrom a public web and SEO perspective, duplicated content is very undesirable, as it pollutes public search indexes with redundant information and makes a company lose control of where its audience is sent from the 
SERP, as search engines will use multiple signals to identify the canonical URLs of a piece of content.\n\nSince users speak, search, and think in languages, for most websites it makes more sense to operate language-first, letting users interact with content based on language rather than market.\n\nThere are certain cases and verticals where there is very little or no overlap between the content produced for individual markets, in which case a market/region-based approach is of course more appropriate. These include industries where branding and regulation differ vastly between countries. But for most international websites that share a lot of the same content across markets, it is much more intuitive for visitors to consume the content by language, and to be able to consume the content of a website in any language it is available in, independently of their current geographic location.\n\nOn the Inside: The content management problem\n\nIn the past, many systems have optimized for the ability to create and manage a lot of duplicate content. Given the problems with duplicate content described above, this clearly seems like the wrong approach. Beyond creating a subpar SEO and site-visitor experience, it also amplifies the management problem: content is copied around many times, and no matter how good the tools are, this leads to conflicts and issues that could have been avoided.\n\nBest Practices around Translation and Localization\nURL and Content Structure\n\nIn AEM we recommend a content and externally visible URL structure that works in two tiers.\n\nThe first level focuses on language and the second level focuses on market/region.\n\nSomething like:\n\nhttps://<domain>/en/\nhttps://<domain>/en/us/\nhttps://<domain>/en/apac/\nhttps://<domain>/es/us/\nhttps://<domain>/es/mx/\n\n\nHierarchical fallbacks (e.g. / contains English content, or /en/ automatically contains US content) are definitely reasonable and should be based on a company's business focus. So for a company whose business is predominantly in the English-speaking US, / should serve the English-language website for the US.\n\nThis is equally applicable to spreadsheet-based resources and documents. In some cases it may make sense to override language settings with per-market settings. A simple example is placeholders, where most of the tokens would live in the language folder and would be overridden from the corresponding market folder.\n\nLanguage and Market detection\n\nAutomatically detecting the language and the market is only a relatively small portion of any multi-language, multi-market architecture. It cannot be done perfectly, and therefore the user should always have the opportunity to self-select into a particular market (and language) and have that decision persisted on their device. There is also legislation that forbids geofencing, which further de-emphasizes the importance of automatic market detection.\n\nDetecting the language is really only relevant if the user is a new user who is not following a link from a search engine or any other language-sensitive context. A user coming from a search engine will already have searched in the language they want results in, and a user from a paid or social channel will already be in the correct language context and will be sent to the corresponding content. As a fallback for language detection, the user's browser / operating system language is probably a good bet, but it should only be used if there is no other indication.\n\nDetecting a user's market / region is often needed to make sure that the content is localized, but also that commercial offers, currency, and the legal context with respect to privacy etc. are set up the right way and the website responds correctly. 
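The preference order described here (a persisted self-selection always wins, a geo-derived hint is only a fallback) could be sketched like this. All function, key, and market names are hypothetical illustrations, not part of any AEM API:

```javascript
// Decide which market to use for a visitor (hypothetical sketch).
// 1. An explicit, persisted self-selection always wins.
// 2. A geo-derived hint is only a fallback, never an override.
// 3. Otherwise, fall back to the site default.
function resolveMarket({ storedMarket, geoCountry, supportedMarkets, defaultMarket }) {
  if (storedMarket && supportedMarkets.includes(storedMarket)) return storedMarket;
  const hint = geoCountry && geoCountry.toLowerCase();
  if (hint && supportedMarkets.includes(hint)) return hint;
  return defaultMarket;
}
```

In the browser, storedMarket would typically be read from localStorage or a cookie (written whenever the visitor uses the region switcher), and geoCountry from a geo-IP signal or the Geolocation API.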
Once the market is detected or selected, it is used for all market-specific external services as well, for example e-commerce systems.\n\nUsually a good indication is some form of geo IP lookup or the browser Geolocation API. This should be done completely separately from the language of the content that the consumer is interacting with. It is good practice to allow users to self-select into a different market/region with a region switcher and to persist that information on the visitor's device.\n\nHandling Translation\n\nSince a lot of the content in AEM is created in Word, Google Docs, and spreadsheets, a broad range of translation support is available. There is built-in support for machine translation in both Microsoft Office and Google Workspace applications, and since Office formats are extremely common for translation services and providers, existing integrations for bulk translation and translation memories are readily available.\n\nAt scale, in situations where translation processes are standardized with internal, bespoke tooling or APIs for very large organizations with a continuous flow of translation needs, we recommend using Microsoft Office or Google Workspace automation and APIs to connect those.\n\nBuilt-in document comparison tooling and intuitive, accessible versioning allow for easy cherry-picking of new content or translations.\n\nHandling Localization\n\nMarket-specific content should be hosted in the corresponding folder identifying the market, country, or region. This is particularly valuable if only a small fraction of the content is localized.\n\nIn many cases this includes high-traffic pages (e.g. landing pages, homepages), local campaign experiments, legal content (e.g. privacy statements, terms and conditions, etc.) as well as settings for a market (e.g. 
currency, etc.).\n\nThe goal of this setup is that only content that is actually localized exists for any given market.\n\nDepending on SEO and other needs, it may be more advisable to expose external URLs per locale or to decorate the market/region-specific content on the same URL, but in many cases it makes sense to expose the market in the externally visible URL, something like /en/ca/ for the Canadian homepage.\n\nLink rewriting for localization in practice\n\nAs per the above, considering that we will have a market context / affinity stored on the user's device (sessionStorage, localStorage, or a cookie), we have to process the links in content (and code) to make sure that the links shown to the user point to the correct content for the market.\n\nA simple starting point is usually the header and footer, which are very commonly adjusted on a per-market basis. This means there is a copy of the corresponding documents in the respective market folder, and they need to be fetched automatically by the code that displays the navigation and the footer. Beyond that, cross links and CTAs need to be checked for the existence of content in that locale before they are followed.\n\nThe technical implementation of this is usually quite straightforward and relies on an AEM index of all the content that has been localized and made available for a particular market, plus either an event handler for click events and/or rewriting of href attributes, as well as similar market-aware handling of content fragment or other fetch requests.\n\nDetailed Example\n\nIn an example where a website uses English content written for the US market as the default and has some minimal content, e.g. 
the homepage (index) plus a nav and footer localized for the UK market, the content structure would look something like this (in SharePoint):\n\n/en/index.docx\n/en/brands.docx\n/en/footer.docx\n/en/industries.docx\n/en/our-company.docx\n/en/nav.docx\n/en/products.docx\n/en/services.docx\n/en/solutions.docx\n.\n.\n.\n/en/uk/index.docx\n/en/uk/nav.docx\n/en/uk/footer.docx\n\n\nThis translates to a corresponding URL space of www.mycompany.com/en/ for the US homepage and www.mycompany.com/en/uk/ for the UK homepage. For a visitor from the UK market (detected or self-selected as mentioned above), the localized nav and footer are loaded independently of where they navigate on the site.\n\nA UK visitor to www.mycompany.com/en/brands would see the localized navigation and footer with the corresponding links to additional UK content where needed. Beyond that, all inline links in the /en/ content that point to content also available in the /en/uk/ tree (e.g. the homepage in this case) would be dynamically rewritten to point to the corresponding localized version.\n\nThe sitemap with hreflang support would look something like this:\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n <urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\" xmlns:xhtml=\"http://www.w3.org/1999/xhtml\">\n  <url>\n   <loc>https://www.mycompany.com/en/</loc>\n   <lastmod>2022-04-21</lastmod>\n   <xhtml:link rel=\"alternate\" hreflang=\"en\" href=\"https://www.mycompany.com/en/\"/>\n   <xhtml:link rel=\"alternate\" hreflang=\"en-GB\" href=\"https://www.mycompany.com/en/uk/\"/>\n  </url>\n\n  <url>\n   <loc>https://www.mycompany.com/en/brands</loc>\n   <lastmod>2022-04-21</lastmod>\n   <xhtml:link rel=\"alternate\" hreflang=\"en\" href=\"https://www.mycompany.com/en/brands\"/>\n  </url>\n.\n.\n.\n</urlset>\n\nLive Example\n\nA good example of this implementation is blog.adobe.com.\n\nWith the following languages and 
locales:\n\nhttps://blog.adobe.com/de/\nhttps://blog.adobe.com/es/\nhttps://blog.adobe.com/en/apac\nhttps://blog.adobe.com/en/uk\nhttps://blog.adobe.com/fr/\nhttps://blog.adobe.com/it/\n...","lastModified":"1725864574","labs":""},{"path":"/developer/github-actions","title":"Using GitHub Actions to handle Publication Events","image":"/developer/media_1fa0fb2d99576d6c8442fd23c09c244eba66b82e4.png?width=1200&format=pjpg&optimize=medium","description":"Franklin has a lightweight integration with GitHub actions that allows you to run a GitHub actions workflow whenever a page or sheet in Franklin has ...","content":"style\ncontent\n\nUsing GitHub Actions to handle Publication Events\n\nFranklin has a lightweight integration with GitHub Actions that allows you to run a GitHub Actions workflow whenever a page or sheet in Franklin has been published or unpublished. As GitHub Actions is a powerful runtime for all kinds of integrations, you can use it as a springboard to further integrations, for instance using webhooks, API calls, or even other GitHub workflows.\n\nFranklin can send resource-published and resource-unpublished events to your GitHub repository, where they can trigger a GitHub Actions workflow.\nFrom there, you can perform further processing, apply conditions, and call other APIs or workflow steps.\nThe key starting point is to listen for the repository_dispatch event: https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#repository_dispatch\nYou can find an example workflow on the Franklin Website https://github.com/adobe/helix-website/blob/main/.github/workflows/log-publish.yml (it sends a notification to Slack)","lastModified":"1725864574","labs":""},{"path":"/docs/authentication-setup-authoring","title":"Configuring Authentication for Authors","image":"/docs/media_1440edf7c6f082e7b36d324d1ed8927febc5e8e6e.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to enable authentication for AEM 
authors.","content":"style\ncontent\n\nConfiguring Authentication for Authors\nEnable Authentication for Authors\n\nBy default, authors don’t need to be logged in to use AEM via Sidekick. To enable authentication, it is sufficient to add the relevant access statements to your site configuration. Upon encountering these access statements, the Sidekick will enforce authentication with the respective provider: Microsoft for SharePoint-based projects, and Google for Google Drive-based projects.\n\nStep 1: Add Access Allow To Configuration\n\nUse the configuration service to update the access/admin section of your site configuration.\n\nAdd admin/role/author and/or admin/role/publish array properties for each individual user or wildcard domain you’d like to give access to the site.\n\nExample for an individual user: admin/role/publish: [ \"some.user@example.com\" ]\nExample for a wildcard domain: admin/role/author: [ \"*@example.com\" ]\n\nThe following example would grant author access to all users within the “example.com” domain and publish rights to the single user “some.user@example.com”:\n\ncurl -X POST https://admin.hlx.page/config/acme/sites/website/access/admin.json \\\n  -H \"content-type: application/json\" \\\n  -H \"x-auth-token: <your-auth-token>\" \\\n  --data '{\n\t\"role\": {\n         \"author\": [ \"*@example.com\" ],\n         \"publish\": [ \"some.user@example.com\" ]\n\t}\n}'\n\n\n\nEnsure that the users are able to authenticate themselves using their login credentials as follows:\n\nIf using AEM or Document Authoring, they must be able to authenticate with their credentials using Adobe authentication.\nIf using Google Drive, they must be able to authenticate with their credentials using Google authentication.\nIf using Microsoft SharePoint, they must be able to authenticate with their credentials using Microsoft authentication.\nStep 2: Log in via Sidekick\nThe next time the Sidekick opens on a document, it will show a \"Sign In\" 
option:\n\nOnce you click it, it will open a new browser tab, redirecting to your respective provider:\n\nThe first time, it will ask for consent so that the Admin service can access your SharePoint or Google data. If you are not an admin on the account, you will see the following message:\n\nIn this case, ask an Active Directory admin of your organization to log in via Sidekick or directly via the admin link: https://admin.hlx.page/auth/microsoft\n\nThe admin can either grant consent directly by checking ‘Consent on behalf of your organization’ when they log in, or later via the Azure Portal.\n\nNow the non-admin user should be able to log in:\nStep 2 (Alternative): Granting Admin Consent via the Azure Portal\n\nTo grant admin consent, open the Azure portal and go to:\nHome → Active Directory → Enterprise Applications\n\n(Note that the application is only visible if an admin has logged in via the steps above.)\n\nSearch for the AEM Content Integration Admin API:\n\nIt should have the application id shown above.\n\nSelect the Permissions tab (below Security):\n\nAnd click on Grant admin consent for {your organization}.\nAfter clicking Accept, you can refresh the Permissions blade a few times until the consented permissions show up:\nUsing the Admin Service (admin.hlx.page)\n\nWhen authentication is enabled for admin.hlx.page, using the API endpoint with tools like curl requires a proper auth token. For one-time, ad-hoc use by developers, it is very convenient to copy/paste the x-auth-token header from an authenticated request sent by Sidekick to admin.hlx.page (visible in your browser's network tab) and pass it to curl via the -H option, e.g.:\n\ncurl -v -H \"x-auth-token: id_token=...\" \"https://admin.hlx.page/status/{org}/{repo}/main/?editUrl=auto\"\n\nDefine user roles without enforcing authentication\n\nBy default, as soon as a role mapping is defined via an admin/role/* entry, authentication is enforced on that project. 
It might be desirable to allow unauthenticated access but still be able to define a user mapping, for example to give a user the admin role.\n\nThe requireAuth property can be used for this with the following values:\n\nauto is the default and enforces authentication as soon as a role mapping is defined.\ntrue will always enforce authentication.\nfalse will not enforce authentication.\nExample:\n\nGive the user bob@example.com the admin role but don’t enforce authentication:\n\ncurl -X POST https://admin.hlx.page/config/acme/sites/website/access/admin.json \\\n  -H \"content-type: application/json\" \\\n  -H \"x-auth-token: <your-auth-token>\" \\\n  --data '{\n\t\"role\": {\n         \"admin\": [ \"bob@example.com\" ]\n\t},\n       \"requireAuth\": false\n}'\n\nDefault Roles\n\nIf no role mapping is configured, the admin uses default roles to determine the permissions of the request. The default role can be specified using the admin/defaultRole property.\n\nBy default there are no default roles, unless requireAuth is auto, in which case the default role is basic_publish (see below). 
Example:\n\nUse publish as the default role:\n\ncurl -X POST https://admin.hlx.page/config/acme/sites/website/access/admin.json \\\n  -H \"content-type: application/json\" \\\n  -H \"x-auth-token: <your-auth-token>\" \\\n  --data '{\n       \"defaultRole\": \"publish\"\n}'\n\nEvaluation\n\nThe effective roles of a request are evaluated as follows:\n\nIf a request isn’t authenticated and requireAuth is true, a 401 status code is returned.\nIf a request isn’t authenticated, requireAuth is auto, and a role mapping is defined, a 401 status code is returned.\nIf a request isn’t authenticated, the defaultRole is used.\nIf a request is authenticated and no role mapping is defined, or if requireAuth is false, the defaultRole is used.\nIf a request is authenticated, a role mapping is defined, and requireAuth is not false, the roles that match the user are used.\nIf no mapping matches, the user has no role, effectively always resulting in a 403 status code.\nIf several mappings match, the user gets the combined set of roles.\nAdmin Permissions\nPermission\t Purpose \n cache:write\t Purge cache \n code:read\t Read code status \n code:write\t Update code \n code:delete\t Delete code \n code:delete-forced\t Delete code (forced) \n config:read\t Read all config \n config:read-redacted\t Read redacted config \n config:write\t Update config \n index:read\t Read index matching \n index:write\t Reindex \n preview:read\t Read preview information \n preview:write\t Update preview \n preview:delete\t Delete preview resources \n preview:delete-forced\t Delete preview resources (forced) \n preview:list\t List preview resources \n edit:read\t Read edit status \n edit:list\t List edit resources \n live:read\t Read live status \n live:write\t Update live resources (publish) \n live:delete\t Delete live resources \n live:delete-forced\t Delete live resources (forced) \n live:list\t List live resources \n cron:read\t Read cron job config \n cron:write\t Update cron job config \n 
snapshot:read\t Read snapshots \n snapshot:write\t Update snapshots \n snapshot:delete\t Delete snapshots \n job:read\t Read job information \n job:write\t Start jobs \n job:list\t List jobs \n log:read\t Read logs \n log:write\t Write logs (append)\nAdmin Roles\nRole\t Permissions \n admin\t\n<all permissions>\n\n basic_author\t\ncache:write\ncode:read\ncode:write\ncode:delete\nindex:read\nindex:write\npreview:read\npreview:write\npreview:delete\nedit:read\nlive:read\ncron:read\ncron:write\nsnapshot:read\njob:read\n\n basic_publish\t\n<basic_author>\nlive:write\nlive:delete\n\n author\t\n<basic_author>\nedit:list\njob:list\nlog:read\npreview:list\npreview:delete-forced\nsnapshot:delete\nsnapshot:write\njob:write\n\n publish\t\n<author>\nlive:write\nlive:delete\nlive:delete-forced\nlive:list\n\n develop\t\n<author>\ncode:write\ncode:delete\ncode:delete-forced\n\n config\t\nconfig:read-redacted\n\n config_admin\t\n<publish>\nconfig:read\nconfig:write\nFurther Reading\n\nIf you have set up the Configuration Service for your site, then you can use the following API operations to configure permissions for authors:\n\nCreate Site Configuration (applies to a single site)\nUpdate Site Configuration (applies to a single site)\nCreate Profile Configuration (applies to multiple sites)\nUpdate Profile Configuration (applies to multiple sites)\n\nIn each of these API requests, you will use the fields:\n\ngroups (with the GroupsConfig schema) to define groups and assign group members\naccess.admin (with the AdminAccessConfig schema) to assign access permissions to groups or users","lastModified":"1770734030","labs":""},{"path":"/developer/block-party/thank-you","title":"Thank you for your submission.","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"We will review your submission and add it to the Block Party list if it qualifies.","content":"Thank you for your submission.\n\nWe will review your submission and add it to the Block Party list if 
it qualifies.","lastModified":"1725864574","labs":""},{"path":"/developer","title":"AEM for Developers","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Frictionless experience management: Build blazing fast websites using tools content creators and developers already know.","content":"Develop with Adobe Experience Manager\nCreate a website,\nseriously fast.\n\nAEM integrates the tech you already own, know, and love. Your content, your tech, your audience taken to the next level.\n\nJoin our Discord and get started\n\nUse the technology you already have\n\nMicrosoft Word\n\nMicrosoft Excel\n\nMicrosoft Sharepoint\n\nFastly\n\nGoogle Docs\n\nGoogle Sheets\n\nGoogle Drive\n\nGithub\n\nCloudflare\n\nAkamai\n\nAWS Cloudfront\n\nSlack\n\nNo time for slow sites\n\nDevelop Faster\n\nEverything modern web development needs, and not a single bit more\nBusiness Owners\nWith its lightweight approach to web development, AEM allows you to go live in weeks, onboard new developers in days, and iterate on changes in minutes. Unlimited preview sites give you confidence that changes will be what you are looking for.\nOperations\nAEM is one less thing to be on call for. Incredible availability due to fully redundant architecture, endless scalability as a full software as a service and easy integrations with your existing CDN make go-live anxiety a thing of the past.\nDevelopers\nAEM uses all the things modern web developers love: GitHub, local development with auto-reload, performance, simplicity – and none of the complications: no transpilation, no bundlers, no configurations, no overhead.\nQuality Engineers\nAutomated assurance of performance, accessibility, SEO, and best practices thanks to built-in quality checks. 
Continuous assurance of real-world site performance and functionality through real user monitoring.\nGet started now!\n\nTwo ways to get started\n\nGet started on your own with the tutorial\n\nStart Tutorial\n\nJoin Discord to get started\n\nJoin Discord\n\nstyle\npb-0","lastModified":"1725864574","labs":""},{"path":"/docs/dev-collab-and-good-practices","title":"Development Collaboration and Good Practices","image":"/docs/media_14b00b877f0e91728c42d63fc1c5d0f28e3e34c71.png?width=1200&format=pjpg&optimize=medium","description":"Working with a large number of development teams across many projects and organizations, we found that it is useful to collect some of our insights. ...","content":"style\ncontent\n\nDevelopment Collaboration and Good Practices\n\nWorking with a large number of development teams across many projects and organizations, we found that it is useful to collect some of our insights. Some of those are related to AEM, but the majority are related to general purpose frontend development or are just general guidelines on how to collaborate in a team of developers.\n\nYou may read some of those items and think that it is generally understood as common sense amongst developers. We agree, and that’s a great sign that you are ready to work in a collaborative way on AEM projects together with other developers.\n\nAt this point this is just a collection of lessons learned from our engagements on a growing set of projects.\n\nGitHub\nKeep the repository public\n\nGiven that all project code is sent to the client-side and therefore accessible in everyone's browser, we recommend keeping the repository as public. 
Adobe has a longstanding history in the open-source community and generally advocates for code to be public unless there is a compelling reason to make it private.\n\nMaintaining a public repository offers the usual benefits of open-source projects: community code reviews, and developers sharing knowledge and code, which in turn fosters innovation and collaboration. Public repositories also make it quick and easy for Adobe and other developers to open pull requests that help improve your project, and Adobe provides automatic PSI checks on feature branches, which are not enabled by default for private repositories.\n\nIf you still have a strong need to keep your repository private, be aware that for private repositories certain features, such as branch protection rules, are only available in paid plans, and CI/CD minutes are limited. For more details, please see GitHub’s pricing page.\n\nCreate pull requests\n\nIf you work on a project with multiple developers, it is rarely a good idea to push directly to main. When your project is in production, code changes that are merged or pushed to main often mean that they are released to production. Protecting your main branch is a good mechanism to ensure that people don’t push to main by accident, which is especially advisable for a site that is in production.\n\nPull request etiquette\n\nIf you open a pull request, make sure that you include a URL to a page (or a number of links to pages) on your branch where the reviewer can see your code in action. 
If you are updating code of an existing block, make sure to include the link that features the block you are updating as the reviewer may not know where this block is in use to test its functionality.\n\nKeep the scope of your Pull Request to what’s in the title/description of the PR.\n\nFor work still in progress, opening a Draft Pull Request helps prevent wasting people's time with reviewing code still in flux as well as accidental merging.\n\nLinting\n\nThe standard boilerplate setup runs the linting tools eslint (airbnb-base) and stylelint for each change in a Pull Request. Do not submit a Pull Request with linting errors for review.\n\nChanging the linting configuration is not recommended unless you have a really good reason. Personal preference is not a good reason. Changing linting rules makes it difficult to reuse code from the AEM boilerplate, block collection or other open source AEM projects. Arguing if something is a good reason to change the linting rules is usually a lot more effort than just categorically saying no.\n\nPageSpeed Insights\n\nThe AEM Code Sync GitHub app runs Google PageSpeed Insights for each change in a pull request to assess the impact on Web Performance, Accessibility, SEO and Best Practices. Do not submit a pull request for review that doesn’t have a green Lighthouse Score for mobile and desktop, ideally 100 for both.\n\nReviews\n\nIt is good practice to have your code reviewed by the maintainer or main developer of the project you are working on. This can be encouraged by setting up branch protection on main to require pull requests with at least 1 approval before any code can be merged. You can still allow project administrators to bypass branch protection settings for emergencies.\n\nShared branches\n\nIt is good practice to consider branches for individual Pull Requests as private to the developer who created the branch. Do not just push into other developers’ branches without having been invited to do so. 
There are situations where people are collaborating on Pull Requests but it should be an explicit agreement.\n\nMerging pull requests (deploying to production)\n\nMerge your own pull requests only. If the person who opened the Pull Request has the ability to merge their own Pull Request, the author of the code is the ideal person to merge. There are situations where the author specifically states that this can and should be merged by someone else, and in those cases a maintainer (main developer) of the project should feel free to merge a Pull Request.\n\nEven if a Pull Request is approved, you should always check with the author of the Pull Request if they are ready to merge.\n\nThe AEM Code Sync GitHub app will automatically deploy changes merged into the main branch to production.\n\n(Scaled) Trunk-based development\n\nFor AEM projects with their built-in devops and CI/CD, we recommend following the scaled trunk-based development model. This means you merge small pull requests into production often, but the quality assurance & review efforts are limited to small change sets. Nobody wants to review and test large pull requests, and long-lived branches with lots of changes tend to be difficult (and dangerous) to merge.\n\nDependency management\n\nMake sure your dependencies are kept up to date. Even if you don't add more dependencies, it is good practice to keep the minimal out of the box dependencies (linting) up to date. Feel free to use tools like Renovate.\n\nCSS\nCSS selector block isolation\n\nAEM blocks most often operate as components collaboratively in the same DOM / CSSOM. This means that you should write your CSS selectors in a block .css in a way that isolates your CSS from impacting layout of elements outside your block. The easiest way to do this is by making sure that every CSS selector in the .css of a block only applies to the block.\n\nCascade in CSS\n\nUse your CSS classnames wisely. 
Some CSS classes and variables are used across different blocks, and others are not expected to be used outside your block. Prefixing classes and variables that are private to your block with the block name is good practice. Conversely, if there are CSS classes and CSS context that should be inherited (often those can be authored), those classes and variables should not be prefixed.\n\nCSS indentation and property order\n\nOutside of a CSS refactoring PR, don’t change the ordering of properties or the indentation across the CSS files you touch in a functional PR. Every developer has different preferences on the sort order of properties or on indentation. Make sure the diff that you see in your PR is isolated to the changes you actually want reviewed before submitting it.\n\nCSS selector complexity\n\nDon’t let your CSS selectors become complex and unreadable. Often it is better to decorate additional CSS classes/elements onto your DOM and write readable CSS instead. Complex CSS selectors are also often harder to maintain and more brittle than the equivalents in JS.\n\nCSS naming\n\nSimple and intuitive class names are helpful for other developers. Avoid namespacing unless it is necessary within the scope of a project. There is often no need to specify the type or the origin (e.g. the name of your design system) of a CSS variable that is to be used across the entire project.\n\nThe !important rule\n\n!important is reserved for very specific isolated cases. Since in website projects we often control the entire CSS context of a page (or at least the vast majority of it), it is very unlikely that there is a need to throw the !important hand grenade.\n\nLeverage ARIA attributes for styling\n\nIn many situations you will add ARIA attributes for accessibility. 
Since those have well defined semantics like expanded or hidden that are understood by all developers, in most cases there is really no need to come up with additional classes in your vocabulary that have unknown semantics.\n\nMobile first\n\nGenerally Web projects should be developed “Mobile First”. This means that your CSS without media query should render a mobile site. Add media queries to extend the layout for tablet and desktop.\n\nBreakpoints\n\nGenerally use 600px, 900px and 1200px as your breakpoints, all min-width. Don’t mix min-width and max-width. Only use different breakpoints in exceptional cases where you have good understanding why that’s needed.\n\nLess, Sass, PostCSS, Tailwind and friends.\n\nIf you are working in the context of a bigger organization, make sure that you don’t introduce a dependency to any CSS preprocessor or framework of your personal preference without getting the buy-in from the entire team and organization. As there are a lot of differing personal preferences in this area, it makes code hard to maintain if every project or every block inside a project uses different technologies.\nThe simplest solution is to rely on the growing CSS feature set which is supported by the browsers.\n\nModern CSS features\n\nMake sure the features you are using are well supported by evergreen browsers. Depending on the features more or less pervasive support may be acceptable.\n\nJavaScript\nFrameworks\n\nOn most web sites, frameworks are overkill for simple layout problems outside of application-like functionality. Frameworks often introduce web performance issues (Lighthouse and Core Web Vitals), particularly if they are in the pathway of the LCP or introduce TBT, while trying to address trivial problems. Keep simple things simple.\nIf you are using Javascript Frameworks make sure that you don’t introduce a dependency to any JS Framework or library of your personal preference without getting the buy-in from the entire team and organization. 
As there are a lot of personal preferences, it makes code hard to maintain if every project or every block inside a project uses different technologies.\nThe simplest solution is to rely on the growing feature set supported by browsers.\n\nBuild tool chain\n\nDiffering build tool chains from project to project make it hard for new developers to be onboarded and often introduce additional complexity. Make sure that you don’t introduce a dependency of your personal preference without getting the buy-in from the entire team and organization.\nThe simplest solution is to keep the entire project build-less.\n\nModern JavaScript features\n\nMake sure the features you are using are well supported by evergreen browsers. Depending on the features more or less pervasive support may be acceptable. While AEM can be used with any browser, aem.js has a dependency on browsers that support dynamic import(). Any feature that is supported by the set of browsers that support dynamic import() should be considered safe. Technically, of course, older browsers (e.g., IE) can be supported by AEM projects, but those require heavy customization.\n\nNot all features have the same consequences if a browser doesn’t support them, some may be cosmetic and others may stop the site from working. A common example is “optional chaining”. If a browser doesn’t support “optional chaining”, a single usage can have fatal consequences for the entire page.\n\nLoading 3rd party libraries\n\nDon’t add 3rd party libraries to your <head> via head.html as they will be in the critical path of loading content above the fold and will often be loaded when they are not needed. 
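One way to keep such dependencies out of the critical path is to load them only when a block actually needs them. The sketch below is illustrative: `loadScript` is a local stand-in modeled on the boilerplate helper of the same name, and `loadWhenVisible` is a hypothetical name; the observer constructor is injectable so the logic can be exercised outside a browser.

```javascript
// Stand-in for the boilerplate's loadScript() helper: inject a <script> tag
// and resolve once it has loaded.
function loadScript(src) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.onload = resolve;
    script.onerror = reject;
    document.head.append(script);
  });
}

// Load `src` only once `block` scrolls into view; load at most once.
function loadWhenVisible(block, src, ObserverCtor = IntersectionObserver) {
  return new Promise((resolve) => {
    const observer = new ObserverCtor((entries) => {
      if (entries.some((entry) => entry.isIntersecting)) {
        observer.disconnect(); // stop watching after the first load
        resolve(loadScript(src));
      }
    });
    observer.observe(block);
  });
}
```

In a block's decorate function you might call `loadWhenVisible(block, '…/heavy-lib.js')` and initialize the library once the returned promise resolves.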
Add the dependencies where needed via loadScript() to the specific block that has the corresponding requirement.\nIn the case of larger 3rd party libraries, you may even want to consider using an IntersectionObserver to make sure you only load them when the block depending on them is actually scrolled into view.\n\nAEM library (aem.js)\n\nThe AEM library (usually called aem.js in a boilerplate project) is currently not minified and obfuscated, to make debugging easier. We discourage making changes to it on a project basis and instead recommend keeping project-specific extensions outside of the library. We welcome Pull Requests via GitHub if you would like to propose changes or bug fixes that are universally applicable.\n\n<head>\n\nThe <head> that is delivered from the server as part of the HTML markup should remain minimal and free of marketing technology like Adobe Web SDK, Google Tag Manager or other 3rd party scripts due to performance impacts. Adding inline scripts or styles to <head> is also not advisable for performance and code management reasons.\n\nMinification\n\nTypically this is additional complexity without much benefit unless you have really large JS/CSS files, which again would be an anti-pattern. With Edge Delivery, the way the code is structured around blocks, the files should usually be small, and minifying should not make much of a difference. To be sure, you can compare the Lighthouse Score pre/post minification.\nMinification makes code slightly harder to debug, and you'd need sourcemaps. Also, minifying requires an additional build step which can potentially slow down site development work. So you’d want to go there only if there is a tangible benefit to justify this additional complexity.\n\nContent first\nStart your development with content\n\nBefore writing a line of code, create your (sample) content in a Word or Google Doc (or spreadsheet). 
Make sure that it feels good for authoring and share it with people on your team who have experience supporting authors. It requires support experience to understand what content structures are easy for authors to understand and recreate. Once you have settled on a content structure that contains all the elements you need for your block, and have had it reviewed, you can get started developing your CSS and JS code.\n\nUse drafts\n\nThe content lifecycle is very different from the code lifecycle. If you are proposing changes to an existing content structure in your code or come up with a new block, don’t just make those changes on the page you are working on. Copy the page into your /drafts/<yourname>/ folder and make changes there.\nOnce your code changes are merged to main, you can have authors copy or merge your content with the content outside of your /drafts/ folder.\n\nBackwards compatibility of new features\n\nEspecially in production environments it is important to keep your anticipated changes to the content structure backwards compatible with existing content. Ideally, code that is being merged should not have an impact on the website or require refactoring the content. The new functionality only becomes available once new content is put in place through a preview and publish cycle. This of course doesn’t apply to things like design changes across the existing content or functional bug fixes.\n\nUse content for “static” resources\n\nGenerally, it is not a good idea to commit binaries into your GitHub repo. Even text-based static resources, for example HTML files or SVGs, should only be put into GitHub in exceptional cases. A good reason to add an SVG to your git repo is if it is referenced from code. Don’t commit anything that is related to the content authoring process, or could be a part of an authoring process. 
There are some exceptions (usually related to legacy and non-browser clients) that require a certain set of fixed resources that cannot be produced and manipulated dynamically by AEM, but in general if you find a large set of static resources (e.g. images, etc.) or an HTML file in a PR it is most likely not a good practice.\n\nUse content for strings/literals\n\nStrings that are displayed to end users and could possibly be translated or changed at some point should always be authorable and sourced from content (eg. placeholders or other spreadsheets or documents). If you have a literal string that is displayed as text to the visitor of your website via javascript or css code, it is good practice to replace it with a reference to content.","lastModified":"1770917458","labs":""},{"path":"/developer/placeholders","title":"Using Placeholders","image":"/developer/media_1924a42826eff0f60ff46c462d9fe3749e6a7bb66.png?width=1200&format=pjpg&optimize=medium","description":"In most websites, there are strings or variables that will be used throughout the site. Especially in sites that need to support multiple languages, it ...","content":"style\ncontent\n\nUsing Placeholders\n\nIn most websites, there are strings or variables that will be used throughout the site. Especially in sites that need to support multiple languages, it is not a good idea to hard code such values. Instead placeholders can be used and managed centrally.\n\nFor information on how to author placeholders, see the placeholders documentation in the publish section.\n\nNote: the placeholders feature has been moved to the block-collection repository and is not part of the boilerplate.\n\nYou can import fetchPlaceholders in your block’s JS or scripts.js and use it as follows to retrieve placeholder strings. You probably have some function or logic in your project to determine the language of the current page based on its path or metadata. In this example, we’ll simply hardcode it to en. 
This means it will fetch a placeholders sheet in the en folder. Omitting the argument will assume there’s a placeholders sheet in the root folder.\n\nimport { fetchPlaceholders } from '/scripts/placeholders.js';\n\n// fetch placeholders from the 'en' folder\nconst placeholders = await fetchPlaceholders('en');\n// retrieve the value for key 'foo'\nconst { foo } = placeholders;\n\nKey formatting\n\nKeys which contain spaces or dashes in the placeholder sheet will be camel-cased for easier access in JavaScript:\n\nabout us will become aboutUs\nfoo-bar will become fooBar\nVIP lounge will become VipLounge\n\nYou can use the helper function toCamelCase to convert keys to property names.","lastModified":"1745328651","labs":""},{"path":"/docs/aem-assets-sidekick-plugin","title":"Adobe Experience Manager Assets Sidekick Plugin","image":"/docs/media_17dc5f4102113dfa4f96b6520d011cb9328b2cd47.png?width=1200&format=pjpg&optimize=medium","description":"With the Experience Manager Assets Sidekick plugin, you can use assets from your Experience Manager Assets repository while authoring documents in Microsoft Word or Google ...","content":"style\ncontent\n\nAdobe Experience Manager Assets Sidekick Plugin\n\nWith the Experience Manager Assets Sidekick plugin, you can use assets from your Experience Manager Assets repository while authoring documents in Microsoft Word or Google Docs.\n\nImportant Note: For using assets while authoring a page using Universal Editor based WYSIWYG authoring, please refer to Universal Editor Custom Asset Picker. 
The documentation here is specifically for document based authoring experience.\n\nSupported Experience Manager Assets Versions\n\nThe Sidekick plugin for Assets supports access to:\n\nAdobe Experience Manager Assets as a Cloud Service\nAdobe Experience Manager Assets Essentials\n\nNote: To simplify access to Experience Manager Assets version 6.5 (Adobe Managed Services or on-premises), you can implement a project-based plugin using the asset selector from version 6.5.\n\nGrant Users Access to Assets\n\nContent authors who need access to assets using the plugin, must be entitled to an Experience Manager Assets environment by assigning them to the respective product profile.\n\nPlease see Assigning AEM Product Profiles for details.\n\nAssets Sidekick Plugin for Content Authors\n\nContent authors can access assets from Experience Manager Assets without having to leave Microsoft Word or Google Docs.\n\nOpen up the Sidekick browser extension and find the “My Assets” button.\nNote that the name of the button may vary depending on your project’s configuration.\nPlease see the document Configuring AEM Assets Sidekick Plugin to configure this plugin.\nThe Asset Selector opens.\nYou must log in to the Asset Selector using your Adobe login credentials for Assets.\nFor more details, please see the section Give users access to assets.\nDepending on how authentication is set up for your project, these credentials might be different from those you use to log in to Microsoft Word / Google Doc.\nSearch for the asset that you need in the Asset Selector.\nYou can use keyword search, filter for specific asset formats, sort results, and use different views (list, grid, gallery, and waterfall).\nThe assets view shows basic properties of the asset including approval status, if set.\nWhen hovering over assets, you will see the info icon, which shows additional asset metadata.\nSelecting an asset copies it to your clipboard.\nYou can then paste the copied asset to your Microsoft Word or 
Google Doc document.\nAfter using the Sidekick to preview and publish the document, the new asset is displayed as part of your web page\n\nNote: Non-image assets like videos, PDF documents, zip files etc. cannot be copied to Word / Google Docs documents. To use such assets, the reference URL for those assets should be copied instead. For enabling this reference URL-based copying of the asset, please refer to the section Using Asset References for PDF, Zip, etc. when Authoring Content.\n\nDelivering Assets via Dynamic Media with OpenAPI\n\nUsing the Assets Sidekick plugin, you can also include Dynamic Media delivery with OpenAPI. This offers a number of benefits.\n\nAccess to brand-approved assets only (images, videos, pdfs, other formats) from AEM Assets Cloud Services\nGovernance (references vs. copies of the asset), which helps with auto-propagation of asset lifecycle events like expiration, deletion, and updates\nDynamic image renditions and smart crop\nRich media optimization and delivery (e.g., adaptive video streaming OOTB, and original asset delivery for PDFs)\nAsset-level impressions report (Coming Soon)\n\nFor more information on capabilities offered by Dynamic Media with OpenAPI, please see the Dynamic Media with OpenAPI capabilities documentation.\n\nPrerequisites\n\nTo use asset references you must have:\n\nAn Assets Cloud Service environment where Dynamic Media with Open API is enabled.\nA Dynamic Media license.\nThe AEM Assets sidekick plugin enabled with copy reference for image assets enabled as documented here.\nAssets that are approved, i.e. dam:status=\"Approved\" via the Assets Cloud Services backend or UI actions.\n\nDynamic Media with OpenAPI is now in an Early Adopter program. 
Please reach out to your account team or Adobe support Slack channel for more information.\n\nUsing Image References when Authoring Content\n\nIf your project meets the prerequisites, you can use image assets by copying their reference URL.\n\nOpen the Sidekick and the Assets Selector, choose a repository with the prefix “Delivery-” in the repository switcher, and filter for image assets.\n\nSelect an image asset.\nA single click on the image asset card copies the details to the clipboard.\nThis is confirmed via a Copied banner.\n\nPaste the link into your source document.\nThe link points to the original rendition delivery URL of the image.\n\nPreview the site and the link is rendered as an image.\n\nIf all looks good, publish the document.\n\nAdditional image transformations like crop, rotate, flip, etc. are available using Dynamic Media with OpenAPI by appending query parameters to the end of the URL copied from the AEM Asset Selector. Available query parameters are detailed in the Assets Delivery API documentation.\n\nUsing Video References when Authoring Content\n\nIf your project meets the prerequisites, you can use video assets by copying their reference URL.\n\nOpen the Sidekick and the Assets Selector, choose a repository with the prefix “Delivery-” in the repository switcher, and filter for video assets.\n\nSelect a video asset.\nA single click on the video asset card copies the details to the clipboard.\nThis is confirmed via a Copied banner.\nPaste the link into your source document.\nThe link is pasted as an embed block, which includes a hyperlink.\nThe hyperlink is the video player URL for the selected video asset.\n\nPreview the site and the video block and link are rendered as a video.\n\nIf all looks good, publish the document.\nUsing Asset References for PDF, Zip, etc. when Authoring Content\n\nIf your project meets the prerequisites, you can use assets for other media types such as PDF, Zip, etc. 
by copying their reference URL.\n\nOpen the Sidekick and the Assets Selector, choose a repository with the prefix “Delivery-” in the repository switcher, and filter for the asset you want to select.\n\nSelect the asset, such as a PDF.\nA single click on the asset card copies the details to the clipboard.\n\nPaste the link into your source document.\nThe link is pasted as an embed block, which includes a hyperlink.\nThe hyperlink points to the original rendition delivery URL of the asset.\n\nPreview the site and verify that the PDF link is rendered.\n\nIf all looks good, publish the document. Clicking on the PDF link in the published page should open the PDF, delivered via Dynamic Media with OpenAPI, in a new tab.\nCustomizing the AEM Assets Sidekick Plugin\n\nThe AEM Assets Sidekick Plugin can be customized to better fit your project’s specific needs. Available customization options include:\n\nCustomizing the block structure (block title, number of rows, columns, etc.) that gets copied over to the document when copying an asset from the assets add-on.\nControlling whether an asset is copied over as a binary or as a URL from which the asset can be delivered.\nSetting up the default filter schema to show only assets relevant to the page being authored.\nMore options, like specifying the asset’s delivery domain.\n\nPlease see the document Configuring Adobe Experience Manager Assets Sidekick Plugin for details on customization and extension options available.","lastModified":"1736615757","labs":"AEM Assets"},{"path":"/docs/setup-sharepoint","title":"How to use Sharepoint","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"","content":"style\ncontent\n\nHow to use Sharepoint\nIf the content is hosted on Adobe’s Sharepoint (i.e. 
https://adobe.sharepoint.com) please read Setup Adobe Sharepoint\nIf the content is hosted on a non Adobe Sharepoint please read Setup Customer Sharepoint","lastModified":"1725864574","labs":""},{"path":"/docs/setup-adobe-sharepoint","title":"How to use Adobe Sharepoint","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"NOTE: This only applies for projects on Adobe’s Sharepoint (https://adobe.sharepoint.com) . For projects using a non Adobe Sharepoint, please continue here.","content":"style\ncontent\n\nHow to use Adobe Sharepoint\n\nNOTE: This only applies to projects on Adobe’s SharePoint (https://adobe.sharepoint.com). For projects using a non-Adobe SharePoint, please continue here.\n\nSetting up SharePoint involves the following steps:\n\nCreate a folder within SharePoint that will be the website root.\nShare the website root folder with the helix@adobe.com user.\nConfigure the fstab.yaml with the respective folder.\n1. Create the website root folder\n\nNavigate to your desired location in SharePoint and create a root folder that will be your website root. It is best not to use a SharePoint list root directly, so that you have a shared space for your authors to put collateral documents, for example a drafts folder or how-to-author documentation.\n\nAn example file structure might look like this, using the website folder as the root:\n\n2. Share the website root folder\n\nEnsure that the helix@adobe.com user has edit rights on the website root folder. This can be achieved easily by clicking on the … ellipsis menu and selecting “Manage Access”.\n\nAnd then add the user via the “Direct access” option.\n\n3. Configure the fstab.yaml\n\nThe next step is to configure the mountpoint in the fstab.yaml to point to the website root. It usually has the form of\n\nhttps://<tenant>.sharepoint.com/sites/<sp-site>/Shared%20Documents/website\n\nBut this might vary depending on how you create the SharePoint site and lists. 
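Since the exact form of the mountpoint varies, a loose sanity check of the typical URL shape can help catch copy-paste mistakes. This is purely illustrative (the function is hypothetical, and the pattern assumes the default Shared Documents library):

```javascript
// Rough check for the typical SharePoint mountpoint shape described above;
// adjust the pattern if your site uses a different document library.
function looksLikeSharepointMountpoint(url) {
  return /^https:\/\/[\w-]+\.sharepoint\.com\/sites\/[\w-]+\/Shared%20Documents(\/|$)/.test(url);
}
```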
The simplest way to obtain the URL is to copy and paste the first part from the browser address bar, e.g.:\n\n\n\nAnd then add the rest manually (note that copying the share link via the UI adds unnecessary information; it is better to use a canonical representation of the URL). Once you have composed the URL, you can test it by entering it again in the browser. You should end up in the folder view of your website root.\n\nAfter that, update the fstab.yaml accordingly.\n\nFor example:\n\nmountpoints:\n  /: https://adobeenterprisesupportaem.sharepoint.com/sites/hlx-test-project/Shared%20Documents/website\n\n\nTo finalize the configuration, commit the fstab.yaml back to the main branch.","lastModified":"1725864574","labs":""},{"path":"/docs/sidekick-library","title":"What is the Sidekick Library?","image":"/docs/media_1e5fb8dc7e6733da52dfc650a8634e9edf483eee5.jpg?width=1200&format=pjpg&optimize=medium","description":"The Sidekick Library is an extension for the AEM Sidekick that enables developers to create UI-driven tooling for content authors. It includes a built-in blocks ...","content":"style\ncontent\n\nWhat is the Sidekick Library?\n\nThe Sidekick Library is an extension for the AEM Sidekick that enables developers to create UI-driven tooling for content authors. It includes a built-in blocks plugin that can display a list of all blocks to authors in an intuitive manner, removing the need for authors to remember or search for every variation of a block. Developers can also write their own plugins for the sidekick library.\n\nHow to use the Sidekick Library?\n\nThe steps below detail how to set up the sidekick library and configure the blocks plugin.\n\nLibrary Sheet Setup\n\nThe sidekick library is populated with your plugins and plugin content using a sheet.\n\n1. Start by creating a directory in Sharepoint or Google Drive where you want to store the content for the library.
We recommend storing the content in /tools/sidekick (or any other name) in the root of the mountpoint. The next steps will assume the directory is called /tools/sidekick.\n\n2. Next, create a workbook (an Excel file) in the /tools/sidekick directory called library (or any other name). Each sheet in the workbook represents a plugin that will be loaded by the Sidekick Library. The name of the sheet determines the name of the plugin that will be loaded. Any data contained in the sheet will be passed to the plugin when loaded. The plugin sheet name must be prepended with helix-. For example, if you want to load a plugin called tags, you would create a sheet named helix-tags.\n\n3. For this tutorial, we will create a sheet for our blocks plugin. Create a sheet (or rename the default sheet), call it `helix-blocks`, and leave it empty for now.\n\nBlocks Plugin\n\nThe Sidekick library comes with a blocks plugin.\n\nhttps://www.aem.live/docs/sidekick-library.mp4\nBlocks Plugin Setup\n\nTo generate content for the blocks plugin, you need to prepare a separate Word document for each block you want to include.\n\n1. Create a directory inside the /tools/sidekick directory where you will store all the block variations. For example, you could create a directory called blocks inside /tools/sidekick.\n2. For this example, let's assume we want to define all the variations of a block called columns. First, create a Word document called columns inside the blocks directory and provide examples of all the variations of the columns block. After each variation of the block, add a section delimiter.\n3. Preview and publish the columns document.\n4. Open the library workbook created in the last section. Inside the helix-blocks sheet, create two columns named name and path.\n5. Next we need to add a row for our columns block. Add the name of the block in the first column and the URL of the document that defines the block variations in the second column.
For instance, if you want to add the columns block, you could create a row with the name Columns and the path /tools/sidekick/blocks/columns. In order for the library to work across environments (page, live, prod), you should not use an absolute URL for the path column.\n6. Preview and publish the library workbook.\n\n> Since the example blocks are being published, you should use bulk metadata to exclude the content inside of /tools/** from being indexed.\n\nExample library.xlsx\n\nLibrary Metadata\n\nThe blocks plugin supports a special type of block called library metadata, which provides a way for developers to tell the blocks plugin some information about the block or how it should be rendered.\n\nSupported library metadata options\nKey Name\t Value\t Description\t Required \n name\t Name of the block\t Allows you to set a custom name for the block\t false \n description\t A description of the block\t Allows you to set a custom description for a block\t false \n type\t The type of the block\t This tells the blocks plugin how to group the content that makes up your block. Possible options are template or section (details below)\t false \n include next sections\t How many sections to include in the block item\t Use if your block requires content from subsequent sections in order to render.
Should be a number value that indicates how many subsequent sections to include.\t false \n searchtags\t A comma-separated list of search terms\t Allows you to define other terms that could be used when searching for this block in the blocks plugin\t false \n tableHeaderBackgroundColor\t A hex color (e.g. #ff3300)\t Overrides the table header background color for any blocks in the section or page.\t false \n tableHeaderForegroundColor\t A hex color (e.g. #ffffff)\t Overrides the table header foreground color for any blocks in the section or page.\t false \n contentEditable\t A boolean value (default: true)\t Set to false to disable content editing in the preview window.\t false \n disableCopy\t A boolean value (default: false)\t Set to true to disable the copy button in the preview window.\t false \n hideDetailsView\t A boolean value (default: false)\t Hide the block details panel inside the preview window.\t false\nDefault Library metadata vs Library metadata\n\nThere are two types of library metadata: library metadata that lives within a section containing the block, and default library metadata that applies to the document as a whole and lives in a section on its own (a block called library metadata as the only child in a section).\n\nLet's take an example of a hero block that has 5 variants. Suppose you want to add the same description for each variation of the block. Rather than duplicating the library metadata with the description into each section containing the variations, you could instead use default library metadata to apply the same description to every variation of the block.
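The precedence rule described above can be pictured as a simple merge in which section-level keys win (a sketch only; resolveLibraryMetadata is a hypothetical helper, not part of the plugin):

```javascript
// Hypothetical sketch of the precedence rule: keys from the section's
// library metadata override the document's default library metadata.
function resolveLibraryMetadata(defaultMeta, sectionMeta = {}) {
  return { ...defaultMeta, ...sectionMeta };
}

// A default description shared by all variants, overridden for one variant:
const resolved = resolveLibraryMetadata(
  { description: 'A hero banner' },
  { description: 'A hero banner with dark styling' },
);
```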
If you decide that one variation actually needs a slightly different description, you could add library metadata to the section containing the variation, and it would override the default library metadata description when it's rendered within the blocks plugin.\n\nAuthoring block names and descriptions using Library Metadata\n\nBy default, the block name (with variation) will be used to render the item in the blocks plugin. For example, if the name of the block is columns (center, background), then that name will be used as the label when it’s rendered in the blocks plugin. This can be customized by creating a library metadata section within the same section as the block. Library metadata can also be used to author a description of the block, as well as adding searchTags to include an alias for the block when using the search feature.\n\nExample block with custom name and description\n\nContent\n\nDisplay\n\nAutoblocks and Default Content\n\nThe blocks plugin is capable of rendering default content and autoblocks. In order to achieve this, it is necessary to place your default content or autoblock within a dedicated section, which should include a library metadata table defining a name property, as previously described. If no name is specified in the library metadata, the item will be labeled as \"Unnamed Item.\"\n\nBlocks composed of content in subsequent sections\n\nThere are situations where developers may want a block to consist of content from subsequent sections. This pattern is discouraged for reasons stated here, but if you choose to use it, the blocks plugin can render these items using the include next sections property in library metadata.\n\nTemplates\n\nTemplates are a way to group an entire document into a single element in the sidekick library.
To mark a document as a template, set type to template in default library metadata.\n\n> Important: the library metadata needs to be in its own section and be the only child to be considered default library metadata.\n\nSupporting metadata is also desirable for templates. To add a metadata table to the template, you can use a Page metadata block.\n\nWhen the template is copied, a metadata block with the values is added to the clipboard along with the content.\n\nSidekick plugin setup\n\nSince the sidekick library is hosted on the same origin as the content, a static HTML page needs to be created to load and configure the content.\n\n1. Create a file called library.html in tools/sidekick/.\n\n2. Paste the following code in library.html.\n\n<!DOCTYPE html>\n<html lang=\"en\">\n  <head>\n    <meta charset=\"utf-8\" />\n    <meta\n      name=\"viewport\"\n      content=\"width=device-width, initial-scale=1.0, viewport-fit=cover\"\n    />\n    <meta name=\"Description\" content=\"AEM Sidekick Library\" />\n    <meta name=\"robots\" content=\"noindex\" />\n    <base href=\"/\" />\n\n    <style>\n      html,\n      body {\n        margin: 0;\n        padding: 0;\n        font-family: sans-serif;\n        background-color: #ededed;\n        height: 100%;\n      }\n      \n      helix-sidekick { display: none }\n    </style>\n    <title>Sidekick Library</title>\n  </head>\n\n  <body>\n    <script\n      type=\"module\"\n      src=\"https://www.aem.live/tools/sidekick/library/index.js\"\n    ></script>\n    <script>\n      const library = document.createElement('sidekick-library')\n      library.config = {\n        base: '/tools/sidekick/library.json',\n      }\n\n      document.body.prepend(library)\n    </script>\n  </body>\n</html>\n\n\nIn the code above, we load the sidekick library from aem.live and then create a custom sidekick-library element and add it to the page.
The sidekick-library element accepts a config object that is required to configure the sidekick library.\n\nSupported configuration parameters\nParameter Name\t Value\t Description\t Required \n base\t Path to the library\t The base library to be loaded\t true \n extends\t Absolute URL to the extended library\t A library to extend the base library with\t false \n plugins\t An object containing plugins to register with the sidekick library.\t The plugins object can be used to register plugins and configure data that should be passed to the plugin\t false\n\nThe blocks plugin supports the following configuration properties that can be set using the plugins object.\n\nBlocks plugin configuration parameters\nParameter Name\t Value\t Description\t Required \n encodeImages\t A boolean value that indicates if images should be encoded during copy operations\t If your site has a Zero Trust Network Access (ZTNA) service enabled, such as Cloudflare Access, then images should be encoded for copy/paste operations to work correctly.\t true \n viewPorts\t Full or simplified configuration object; see examples below.\t Configuration to overwrite the default viewport sizes.
The default is 480px for mobile, 768px for tablet, and 100% of the current window for desktop.\t false \n contentEditable\t A boolean value to disable content editing globally in previews.\t Set to false to disable content editing.\t false\nExamples\n\nEncoding images\n\nconst library = document.createElement('sidekick-library')\nlibrary.config = {\n  base: '/tools/sidekick/library.json',\n  plugins: {\n    blocks: {\n      encodeImages: true,\n    }\n  }\n}\n\n\nCustom viewports (short form)\n\nconst library = document.createElement('sidekick-library')\nlibrary.config = {\n  base: '/tools/sidekick/library.json',\n  plugins: {\n    blocks: {\n      viewPorts: [600, 900],\n    }\n  }\n}\n\n\nCustom viewports\n\nconst library = document.createElement('sidekick-library')\nlibrary.config = {\n  base: '/tools/sidekick/library.json',\n  plugins: {\n    blocks: {\n      viewPorts: [\n        {\n          width: '599px',\n          label: 'Small',\n          icon: 'device-phone',\n        },\n        {\n          width: '899px',\n          label: 'Medium',\n          icon: 'device-tablet',\n        },\n        {\n          width: '100%',\n          label: 'Large',\n          icon: 'device-desktop',\n          default: true,\n        },\n      ],\n    }\n  }\n}\n\nCustom table header colors\n\nYou can customize the table header background and foreground color when pasting a block, section metadata, or metadata that was copied from the blocks plugin.\n\nDefault styles can be set in library.html using CSS variables.\n\n <style>\n    :root {\n      --sk-block-table-background-color: #03A;\n      --sk-block-table-foreground-color: #fff;\n\n      --sk-section-metadata-table-background-color: #f30;\n      --sk-section-metadata-table-foreground-color: #000;\n\n      --sk-metadata-table-background-color: #000;\n      --sk-metadata-table-foreground-color: #fff;\n    }\n  </style>\n\n\nThese values can be overridden using library metadata.\n\n> Depending on the system color scheme
selected for the user’s computer (dark mode), Word may alter the chosen colors in an attempt to improve accessibility.\n\nCustom plugin setup\n\nThe example below defines a tags plugin in the config. The keys of the plugins object must match the name of the plugin; any other properties defined in the plugin object will be available to the plugin via the context argument of the decorate method.\n\nconst library = document.createElement('sidekick-library')\nlibrary.config = {\n  base: '/tools/sidekick/library.json',\n  plugins: {\n    tags: {\n      src: '/tools/sidekick/plugins/tags/tags.js',\n      foo: 'bar'\n    }\n  }\n}\n\nExtended Libraries\n\nIn some cases, merging two block libraries may be desirable. When an extended library is defined, the sidekick library application will merge the base library and the extended library together into a single library list for authors.\n\nThe example below defines a base library and an extended library (on another origin) that will be merged into the base library.\n\nconst library = document.createElement('sidekick-library')\nlibrary.config = {\n  base: '/tools/sidekick/library.json',\n  extends: 'https://main--repo--owner.hlx.live/tools/sidekick/library.json'\n}\n\n\n> The Access-Control-Allow-Origin headers will need to be set on the library.json and blocks of the extended library in order for them to load in the sidekick library.
See Custom HTTP Response Headers for more info.\n\n> Due to same-origin policies enforced by browsers on iframes, a preview of an extended block cannot be loaded at this time.\n\nSidekick config setup\n\nNext, in order for the sidekick library to appear in the sidekick, we need to add the plugin to the sidekick object in the configuration service.\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/sidekick.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n    \"project\": \"Example\",\n    \"plugins\": [{\n      \"id\": \"library\",\n      \"title\": \"Library\",\n      \"environments\": [\"edit\"],\n      \"url\": \"/tools/sidekick/library.html\",\n      \"includePaths\": [\"**.docx**\"]\n    }]\n  }'\n\n\nThe url property in the plugin configuration indicates the location from which the sidekick should load the plugin. This should point to the library.html file we previously created.\n\n> If you have already configured the sidekick, be careful not to overwrite the sidekick config with the request above. You should GET the config first and add the plugin to the existing configuration if it exists.\n\nConsiderations when building blocks for the library\n\nThe sidekick library renders blocks by first fetching the plain.html rendition of the block and then stripping it of any other blocks in the content (for example, if there are multiple variations of a block in the response).
It then requests the same page (without .plain.html), replaces the main element with the stripped block, and loads the entire document into an iframe using the srcdoc attribute.\n\nUse of window.location\n\nSince the block is loaded in an iframe using the srcdoc attribute, the instance of the window.location object used by your site's code will not contain the typical values you would expect to see.\n\nExample window.location object when running in the library\n\n{\n  \"host\": \"\",\n  \"hostname\": \"\",\n  \"href\": \"about:srcdoc\",\n  \"origin\": \"null\",\n  \"pathname\": \"srcdoc\",\n  \"port\": \"\",\n  \"protocol\": \"about:\"\n}\n\n\nFor this reason, if your block requires use of the window.location object, we recommend adding the following functions to your scripts.js file and importing them into your function for use.\n\n/**\n * Returns the true origin of the current page in the browser.\n * If the page is running in an iframe with srcdoc, the ancestor origin is returned.\n * @returns {String} The true origin\n */\nexport function getOrigin() {\n  const { location } = window;\n  return location.href === 'about:srcdoc' ? window.parent.location.origin : location.origin;\n}\n\n/**\n * Returns the true href of the current page in the browser.\n * If the page is running in an iframe with srcdoc,\n * the ancestor origin + the path query param is returned.\n * @returns {String} The href of the current page or the href of the block running in the library\n */\nexport function getHref() {\n  if (window.location.href !== 'about:srcdoc') return window.location.href;\n\n  const { location: parentLocation } = window.parent;\n  const urlParams = new URLSearchParams(parentLocation.search);\n  return `${parentLocation.origin}${urlParams.get('path')}`;\n}\n\nUse of createOptimizedPicture in lib-aem\n\nThe createOptimizedPicture function in lib-aem also uses window.location.href.
If you are using this function, we recommend moving it into scripts.js and modifying it to use the getHref() function above.\n\nChecking for the presence of the sidekick library\n\nSometimes you may want to know if the page or the block is running in the sidekick library. To do this, there are a couple of options.\n\n1. Check if window.location.href === 'about:srcdoc'\n\n2. The main element and the block element will contain the sidekick-library class\n\nBuilding a Plugin\n\nDeveloping a plugin is similar to constructing a block in AEM. Once a user tries to load the plugin, the sidekick library will trigger the decorate() method on your plugin. This method receives the container to render the plugin in and any data that is included in the plugin sheet.\n\n/**\n * Called when a user tries to load the plugin\n * @param {HTMLElement} container The container to render the plugin in\n * @param {Object} data The data contained in the plugin sheet\n * @param {String} query If search is active, the current search query\n * @param {Object} context contains any properties set when the plugin was registered\n */\nexport async function decorate(container, data, query, context) {\n  // Render your plugin\n}\n\n\n> The decorate() function must be exported from the plugin.\n\nPlugin default export & search\n\nThe default export from a plugin allows authors to customize the plugin name displayed in the header upon loading, as well as activate the search functionality within the sidekick library.\n\nexport default {\n  title: 'Tags',\n  searchEnabled: true,\n};\n\n\nWhen the searchEnabled property is true, the library header will display a search icon upon loading the plugin.
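The two options listed under Checking for the presence of the sidekick library can be combined into a single helper (a sketch under the stated assumptions; isSidekickLibrary is a hypothetical name, not an official API):

```javascript
// Hypothetical helper combining the two presence checks described earlier.
// Neither check is an official API; both rely on observable behavior of the
// sidekick library preview (srcdoc iframe, sidekick-library class).
function isSidekickLibrary(el) {
  const inSrcdoc = typeof window !== 'undefined'
    && window.location.href === 'about:srcdoc';
  const hasLibraryClass = !!el && el.classList.contains('sidekick-library');
  return inSrcdoc || hasLibraryClass;
}
```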
If the user initiates a search by entering a query, the decorate() function of the plugin will be triggered again, this time with the search string passed in the query parameter of the decorate() function.\n\nPlugin web components\n\nPlugin authors can utilize a select set of web components from Spectrum when building a custom plugin.\n\nThe following components from Spectrum are available\n\nComponent\t Documentation Link \n sp-tooltip\t Docs \n sp-toast\t Docs \n sp-textfield\t Docs \n sp-sidenav-item\t Docs \n sp-sidenav\t Docs \n sp-search\t Docs \n sp-progress-circle\t Docs \n sp-picker\t Docs \n sp-menu-item\t Docs \n sp-menu-group\t Docs \n sp-menu-divider\t Docs \n sp-menu\t Docs \n sp-illustrated-message\t Docs \n sp-divider\t Docs \n sp-card\t Docs \n sp-button-group\t Docs \n sp-button\t Docs \n sp-action-button\t Docs \n overlay-trigger\t Docs\n\nThe following icons from Spectrum are also available\n\nComponent\t Documentation Link \n sp-icon-search\t Docs \n sp-icon-file-template\t Docs \n sp-icon-file-code\t Docs \n sp-icon-device-phone\t Docs \n sp-icon-device-tablet\t Docs \n sp-icon-device-desktop\t Docs \n sp-icon-magic-wand\t Docs \n sp-icon-copy\t Docs \n sp-icon-preview\t Docs \n sp-icon-info\t Docs \n sp-icon-view-detail\t Docs \n sp-icon-chevron-right\t Docs \n sp-icon-chevron-left\t Docs\n\nPlugin Events\n\nPlugin authors can dispatch events from their plugin to the parent sidekick library in order to display a loader or to show a toast message.\n\nToast Messages\nimport { PLUGIN_EVENTS } from 'https://www.aem.live/tools/sidekick/library/events/events.js';\n\nexport async function decorate(container, data, query) {\n  // Show a toast message\n  container.dispatchEvent(new CustomEvent(PLUGIN_EVENTS.TOAST,  { detail: { message: 'Toast Shown!', variant: 'positive | negative' } }))\n}\n\nShow and Hide Loader\nimport { PLUGIN_EVENTS } from 'https://www.aem.live/tools/sidekick/library/events/events.js';\n\nexport async function decorate(container, 
data, query) {\n  // Show loader\n  container.dispatchEvent(new CustomEvent(PLUGIN_EVENTS.SHOW_LOADER))\n  ...\n  // Hide loader\n  container.dispatchEvent(new CustomEvent(PLUGIN_EVENTS.HIDE_LOADER))\n}\n\nExample plugin\n\nTags Plugin\n\nPlugin API Example","lastModified":"1768526072","labs":""},{"path":"/developer/importer","title":"Importing Content","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to use the AEM importer","content":"style\ncontent\n\nImporting Content\n\nAEM offers the capability to easily import an existing site and convert its content to docx files for document-based authoring, HTML files for Document Authoring, or content packages for AEM authoring with the Universal Editor.\n\nImport Concept\n\nIf you copy the content of a web page by selecting Select All and Copy in the browser and paste the selection inside a Word or a Google Doc document, you will see that both programs can easily convert the copied DOM elements into their own basic document elements. An HTML h1 becomes text styled with Heading 1. Any span, div, or p element becomes a Normal paragraph. An image is inserted as an image. 
Et cetera.\n\nThe AEM Importer recognizes and understands these same page semantics and allows you to import content into:\n\ndocx files for document-based authoring with Google Drive or SharePoint\nContent packages for AEM authoring with the Universal Editor\nHTML for Document Authoring (DA)\n\nCheck out the document Where to author if you are not familiar with AEM’s authoring options.\n\nThe importer offers the tooling to automate the process for multiple pages of a site to be imported and converted.\n\nYou start from an existing web page (the DOM is the input).\nYou apply a set of transformations (to remove unnecessary elements, reorder or transform some of the elements, perform cleanup, etc.).\nThe importer creates a docx document, a content package, or HTML for you.\nGetting Started\n\nWhile the content import is one of the first steps to be performed in a project, you need to be familiar with AEM, especially the desired structure of your content.\n\nIt is good practice to make example imports of each page type you wish to import. You can thereby confirm that the authors find the resulting structures intuitive. This also allows parallelization of the content import with the development work for blocks and styling. If you are using document-based authoring, you can do this by creating manual imports using copy/paste from the source website into a Word or Google Docs document.\n\nStructuring your content in an intuitive manner is an important step in an AEM project. See the documents Markup, Sections, Blocks and Auto Blocking and David’s Model, Second take for more information.\n\nA project is usually ready for an import roughly at the end of the tutorial. 
See the following documents for more information.\n\nGetting Started – Universal Editor Developer Tutorial\nGetting Started with AEM - Developer Tutorial\nHow to use Sharepoint (application)\nGetting Started – Document Authoring (DA) Developer Tutorial\n\nAt the end of the tutorials, you can simply run the following command.\n\naem import\n\nThis starts the AEM Importer. You can run this instead of or in addition to aem up.\nThe helix-importer-ui project will be cloned under the tools/importer/helix-importer-ui folder.\nThe import proxy server starts and a new browser window opens at http://localhost:3001/tools/importer/helix-importer-ui/index.html\nAs a first step, you must choose your authoring method so the importer UI presents the appropriate options. In the Authoring Experience Selection modal, select one of the following and click Ok.\nDocument Authoring if you will be using document-based authoring or Document Authoring (DA)\nAEM Authoring if you are using AEM and the Universal Editor to edit your content\n\n\nSelect your authoring experience when first starting the importer\n\nIf you make the wrong selection for your authoring type or otherwise need to change it, click the project picker drop-down in the bottom-right corner of the importer window and select the appropriate option.\n\nDocument Authoring\nAEM Authoring\nReset to be prompted with the Authoring Experience Selection modal again to make your selection\n\nThe bottom-left of the importer window displays your current version of the importer UI as well as the AEM CLI tool.\n\nThe AEM Importer\n\nThe AEM Importer offers a set of tools to help you quickly import your website content.\n\nPlease note that only Chrome-based browsers are supported.\n\nImport - Workbench\n\nThe Import Workbench is where you define and start your import process.
The UI differs depending on the authoring method you initially selected.\n\nDocument Authoring\n\nPerforming an initial import for document-based authoring is quite simple.\n\nProvide a URL (https://wknd.site/us/en.html in our sample case) of a page to import.\nDefine your desired output.\nBy default, under Import Options, Save as docx is selected, which is required for document-based authoring.\nSelect Save HTML for Document Authoring, if you are using DA.\nIf you select multiple options, subdirectories are created in your selected destination for the various formats.\nIf you only select one option, no subdirectory is created.\nClick the Import button.\nThe importer triggers the browser to ask you where you want to save the resulting docx file (for document-based authoring) or HTML (for Document Authoring) and for confirmation that the browser is allowed to read and write to that location.\n\nOnce you confirm this, the import is performed and saved to the selected location in subdirectories per output method. A green banner at the bottom of the importer window confirms a successful import. A red banner reports any errors.\n\n\nThe workbench for document-based authoring\n\nIf you use document-based authoring, it does not matter if you work with Word documents on Sharepoint or Google Docs on Drive. The output of the AEM Importer is always docx file(s). Google Drive has an option to automatically convert the docx files (Settings > General > Convert uploads). 
When you upload the files to your drive, they will be converted automatically.\n\nCheck the \"Convert uploaded files to Google Docs editor format\" checkbox\n\nAEM Authoring\n\nPerforming an initial import for AEM authoring using the Universal Editor is quite simple.\n\nProvide a URL (https://wknd.site/us/en.html in our sample case) of a page to import.\nDefine your desired output.\nUnder Import Options, select Save as JCR package if you are using AEM authoring.\nIf you select another option like Save raw HTML or Save as markdown, subdirectories are created in your selected destination for the various formats.\nIf you only select one option, no subdirectory is created.\nDefine your import paths.\nContent Import Path defines where in your destination repository the content will be stored. This must be under /content.\nAsset Import Path defines where in your destination repository assets will be stored. This must be under /content/dam.\nClick the Import button.\nThe importer triggers the browser to ask you where you want to save the resulting JCR file and for confirmation that the browser is allowed to read and write to that location.\n\nOnce you confirm this, the import is performed and saved to the selected location in subdirectories per output method. A green banner at the bottom of the importer window confirms a successful import. A red banner reports any errors.\n\nThe workbench for AEM authoring\n\nWhere are my assets?\n\nThe importer creates binaryless JCR packages; that is, they contain only content, without any binaries. If you choose to import your content as a JCR package, your content is mapped within the package as per the Content Import Path you specified. The assets are also mapped as per the Asset Import Path, but are proxied via your local importer.\n\nThe import process also generates an asset-mapping.json file alongside your JCR package, which maps the actual assets to the proxied paths.
Due to authorization limitations, these assets could not be downloaded directly as part of the import. However, you can use the asset-mapping.json file and the AEM Import Helper app to download and import your assets.\n\nImport - Workbench – General Features\n\nThe Page preview frame at the bottom of the middle panel shows you the page you are importing. The page is loaded in an iframe and served via the local proxy server. While AEM tries to remove all security settings, it is possible that the page does not fully render like the original due to CORS issues. This is usually fine in 90% of the cases. In the remaining cases, there are different solutions that can be attempted, such as starting your browser with disabled security settings. Please contact the AEM team for assistance in such cases.\n\nThe Preview tab in the right panel shows you an approximation of how the import will appear.\n\nFor document-based authoring, this is an approximation of the resulting Word document.\nFor AEM authoring, this is the JCR content based on the markdown and modeling.\n\n\nThe AEM Importer transforms the HTML into markdown as a first step and then the markdown into a docx file, HTML, or JCR repository, depending on your selected authoring method. The Markdown tab shows you the markdown from that intermediate step.\n\nCheck the Markdown source\n\nThe HTML tab shows the result of the first step of the transformation. It is the result of the DOM manipulation. This tab can occasionally be useful but can be disregarded the majority of the time.\n\n\nCheck the HTML source\n\nBy default, the AEM Importer performs a few things for you automatically, like removing the head and cleaning up the HTML.\n\nTo further customize the import process, you can create a tools/importer/import.js file. This file defines all of your own rules to convert your content. If you change and save the import.js file, the import is automatically re-executed.
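To make the import.js customization just described concrete, here is a minimal sketch (assumptions: the importer's documented transformDOM and generateDocumentPath hooks; the selectors and path rules below are illustrative, not defaults):

```javascript
// Sketch of a minimal tools/importer/import.js (hooks assumed from the
// importer documentation; cleanup rules here are purely illustrative).
const transformation = {
  // Receives the page DOM; returns the element whose content gets imported.
  transformDOM: ({ document }) => {
    const main = document.querySelector('main') || document.body;
    // Illustrative cleanup: drop page chrome that should not be imported.
    main.querySelectorAll('header, footer, nav').forEach((el) => el.remove());
    return main;
  },
  // Maps a source URL to the destination document path.
  generateDocumentPath: ({ url }) => new URL(url).pathname
    .replace(/\.html$/, '')
    .replace(/\/$/, ''),
};

// In the real file, this object would be the default export:
// export default transformation;
```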
In this way, you can preview your changes while you are developing the transformation rules.\n\nIt is highly recommended that you read the GitHub documentation for the importer to learn more and review code snippets to create your own import.js. Note that any rules you add to your own import.js file are in addition to the default behavior of the importer. Additional options for the import are also documented in detail there.\n\nWhat do I do with these docx files/JCR package/HTML?\n\nWith a set of rules (such as removing the header and footer, reorganizing the hero section, creating blocks, inserting metadata, etc.), you can create an import that contains the essential content of the page and that fits perfectly in a Word document, content package, or HTML depending on your authoring method. You can use this content as the base of your new site by:\n\nImporting the docx into Google Drive or SharePoint or copying and pasting the content to start your site with document-based authoring.\nUsing AEM’s package manager or the AEM Import Helper tool to import the JCR package to start your site with AEM authoring and the Universal Editor.\nUsing Document Authoring’s Browse view, you can drag-and-drop the HTML content into DA.\nImport - Bulk\n\nOnce you are satisfied with the transformation and you have individually tested one or more files, you will likely need to bulk import many more. The Import - Bulk tool works nearly the same as the Import - Workbench tool with a few minor differences.\n\nProvide a list of URLs instead of one. Simply paste the list of URLs to import with one URL per line.\nThe import.js file is not automatically reloaded as it is for one-off imports: if you are in the middle of importing 1000 URLs, you probably do not want the process to restart when you change the code.\n\nOtherwise, the options are the same for importing one page or performing a bulk import.\n\nThe number of URLs you can import varies mainly based on the memory each page consumes. 
For example, a heavy SPA page usually does not release memory and the browser tends to crash (between 60 and 100 pages). In such situations, if you only need information that is in the markup, you can disable JavaScript execution in the options and you will be able to import many more pages.\n\nYou can also batch the set of URLs to import if the number is manageable manually. If you have a lot of URLs to import (10k+), contact the AEM team. There are several ways to automate the process without using a browser, which you can discuss with them.\n\nReport\n\nDuring the process, you can download an Excel report with the list of pages imported and some process information (import success, 404, 301, etc.). At the end of the process, this report file contains everything the importer has done and can be used for further analysis, such as finding pages with errors, or for page processing, such as previewing and publishing.\n\nBulk import\n\nCrawl\n\nIf you are importing pages from a website and you do not have the full list of URLs to import, you can use the Crawl tool to build the list based on the sitemap or by crawling the site.\n\nGet from robots.txt or sitemap\n\nAfter providing a hostname and clicking Get from robots.txt or sitemap, the tool will first try to find sitemaps in the /robots.txt file. If no robots.txt is found, it will try the /sitemap.xml file (the default filename to search can be changed in the options).\n\nIf it finds a sitemap, it will collect all the URLs referenced in the sitemap and recursively follow any other referenced sitemap files. When the crawl is complete, you can use the download report button for the list of all unique URLs found.\n\n\nSitemaps extracted URLs\n\nYou can use the Filter pathname option to only output the URLs under a certain path. 
The tool will still need to fetch all the URLs from all the sitemaps.\n\nCrawl\n\nAfter providing a URL and clicking Crawl, the tool will open the provided URL, try to identify the links on the page, and recursively visit all those links that are on the same host. It is basically navigating the site and collecting all the URLs it finds. For a large website, it can take a lot of time. If the website consumes a lot of resources/memory, it may even crash the browser. In such cases, hiding the preview and/or disabling JavaScript in the options can help.\n\n\nCrawling in-progress\n\nYou can use the Filter pathname option to only crawl URLs under a certain path. The provided URLs must then match this filter. This can be very useful for crawling only a subset of a large site.\n\nEyedropper\n\nThe Eyedropper tool allows you to capture the logo and some of the key CSS information of a website. You just need to provide a URL and click Eyedrop.\n\n\nCaptured logo\n\n\nCaptured colors\n\n\nCaptured fonts and sizes\n\nClicking the Copy CSS to clipboard button copies all gathered information in a CSS format that is ready to be pasted into your AEM CSS for further testing and customization.\n\nThe Eyedropper does its best to extract the correct information, but you should review the output and adapt it to your project needs.","lastModified":"1751965349","labs":""},{"path":"/docs/auditlog","title":"Audit log","image":"/docs/media_1f0a20136d57db477c73b7633d273c158825a2056.png?width=1200&format=pjpg&optimize=medium","description":"Admin and indexing operations are recorded in an audit log that can be queried via an Admin endpoint.","content":"style\ncontent\nAudit log\n\nAdmin and indexing operations are recorded in an audit log that can be queried via an Admin endpoint.\n\nOnly users who have the role admin.role.author can read the audit logs; see Admin Roles for more information.\n\nThere’s also a handy tool to look at the audit logs.\n\nAdmin operations\n\nThe audit log stores 
information about the following successful Admin operations:\n\nName\t Description \n preview\t When a page is previewed or removed from preview. \n live\t When a page is published or unpublished. \n index\t When a page is indexed or removed from the index. \n cache\t When a page is manually purged from the cache. \n code\t When code is synchronized from GitHub. \n config\t When the site configuration is modified. \n sitemap\t When a sitemap is rebuilt manually. \n job\t When a cache purge job is triggered after code synchronization. \n form\t When a sheet is prepared for data ingestion (deprecated).\n\nThe following information about those operations is stored:\n\nName\t Description \n timestamp\t Epoch when the operation took place. \n duration\t Time in milliseconds the operation took. \n method\t HTTP method used (POST or DELETE). \n route\t Operation that took place (see above). \n path\t Target path of the operation. \n contentBusId\t Internal content bus ID of the project. \n org\t Organization name. \n site\t Site name. \n ref\t GitHub branch or tag affected. \n user\t User that made the change, missing if the request was not authenticated. \n ip\t Originating IP. \n search\t Query string of the URL requested.\nIndexing operations\n\nThe audit log stores information about the outcome of background indexing operations, namely:\n\nName\t Description \n timestamp\t Epoch when the operation took place. \n contentBusId\t Internal content bus ID of the project. \n changes\t An array of strings describing what changed in the index. \n errors\t An array of strings containing errors that occurred while indexing. 
\n unmodified\t Number of index changes that did not change a row in the index.\nExamples\n\nHere are some examples of how common operations triggered via Sidekick will be reflected in the audit log:\n\nOperation\t Sidekick\t Audit log \n Preview\t Preview button in editor\nReload button in preview environment\t\nmethod: POST, route: /preview\n\n\n Publish\t Publish button in preview, live or production environment\t\nmethod: POST, route: /live\n\n\n Unpublish\t Unpublish button in preview, live or production environment\t\nmethod: DELETE, route: /live\n\n\n Delete\t Delete button in preview, live or production environment\t\nmethod: DELETE, route: /preview\nmethod: DELETE, route: /live\n\nRetention policy\n\nEdge Delivery Services audit logs are retained indefinitely, or for as long as the customer wants to keep them.","lastModified":"1769440593","labs":""},{"path":"/docs/move-project-to-customer-infrastructure","title":"Migrating a VIP project to your own infrastructure","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"VIP projects typically start out on Adobe’s SharePoint and GitHub. This is a great way to get started and go live quickly. However, eventually you ...","content":"Migrating a VIP project to your own infrastructure\n\nVIP projects typically start out on Adobe’s SharePoint and GitHub. This is a great way to get started and go live quickly. However, eventually you should move your project(s) to your own infrastructure for full ownership and control.\n\nA content move from Adobe’s SharePoint to yours is best combined with a code repository move from Adobe’s GitHub organization to yours. Follow the steps below for a side-by-side migration without downtime:\n\nCode migration\nIf migrating your project code from Adobe’s GitHub\nCreate a new, empty GitHub repository in your GitHub organization. By default, the repository should be public for easier collaboration.\nConfigure the AEM Code Sync GitHub App 
for your new repository.\nCopy all files and folders from the repository in Adobe’s GitHub organization to your new one.\nCommit and push the code into the main branch.\nA copy of your site is now available at https://main--project--customer.hlx.page\nCompare with https://main--project--hlxsites.hlx.page to make sure everything works as expected.\nIf your project code is already in your own GitHub\n\nIn this case, create a temporary fork to enable side-by-side migration:\n\nCreate a fork of the code repository in your GitHub organization.\nConfigure the AEM Code Sync GitHub App for your forked repository.\nA copy of your site is now available at https://main--project-fork--customer.hlx.page\nCompare with https://main--project--customer.hlx.page to make sure everything works as expected.\nContent migration\n\nNote: Skip this step if you are already using your own SharePoint or Google Drive.\n\nPrepare your content root folder:\nSharePoint: Follow this documentation to set up your own SharePoint site, link it to your GitHub repository and register your own SharePoint user.\nGoogle Drive: create a new folder and share it with helix@adobe.com as Editor.\nCopy all content from Adobe’s SharePoint site to your own. An easy way to do this is using the OneDrive desktop app: sync both sites to your local disk, copy the content over and wait until the upload has finished. Don’t forget to copy the .helix folder.\nShare your root folder\nBulk-preview all content using the Admin API. Don’t forget to include the .helix folder in the paths array.\nVerify the previewed content on https://main--project--customer.hlx.page\nBulk-publish the content using the Admin API. 
You can use the existing site’s sitemap.xml as a reliable source of published URLs.\nVerify the published content on https://main--project--customer.hlx.live\nSidekick setup\nCode and content migration\n\nAuthors will need to add the new project to their sidekick:\n\nPrepare a sharing URL with an “Add project” button for them using the sidekick configurator.\nAlternatively, authors can also navigate to the project’s new SharePoint URL or https://main--project--customer.hlx.page and then click “Add project” in the extension’s context menu or the “+” button in the toolbar.\nYou can optionally prepare a second sharing URL with the old configuration to give your authors an easy way to remove the old project from their sidekick.\nContent migration only\n\nAuthors will need to update the existing project configuration in their sidekick:\n\nPrepare a sharing URL for them using the sidekick configurator. Instruct authors to first click the “Remove project” button, followed by a click on the “Add project” button.\nAlternatively, authors can also select “Options” in the extension’s context menu, click the “Edit” button on the project, manually enter the new SharePoint URL (same as the mountpoint URL in your fstab.yaml) and click the “Save” button.\nGo-live\nYou migrated your project code from Adobe’s GitHub\n\nTo go live, switch the backend for your production domain in your CDN to the new origin https://main--project--customer.hlx.live.\n\nSee the launch documentation for instructions on how to do this with various CDNs.\nYour project code was already in your own GitHub\nChange the fstab.yaml in your original repository to point to the same mount point URL as in your temporary fork.\nCommit and push the change into the main branch.\nDelete the temporary fork.\nstyle\ncontent","lastModified":"1725864574","labs":""},{"path":"/docs/release-history","title":"Recent 
Releases","image":"/docs/media_10a92b4b51694917a245f635bdc3697f767f335d3.png?width=1200&format=pjpg&optimize=medium","description":"Please find a current list of recent releases of various components of AEM below. This information is directly pulled from our code repositories and some ...","content":"Recent Releases\n\nPlease find a current list of recent releases of various components of AEM below. This information is pulled directly from our code repositories. Some of it is in public repositories, while other aspects beyond the summary release information are considered internal. If you would like more information on a particular release, please let us know, so we can consider adding more public detail or releasing more information to you as a customer if we consider the information internal.\n\nstyle\ncontent\nhttps://aem-release-feed.david8603.workers.dev/","lastModified":"1768293311","labs":""},{"path":"/docs/china","title":"China FAQ","image":"/docs/media_1d314c26bd463af084680aa51525ee16b785408cb.png?width=1200&format=pjpg&optimize=medium","description":"Serving content with Edge Delivery Services in Adobe Experience Manager in China","content":"style\ncontent\n\nChina FAQ\n\nDelivering content in China requires following the relevant laws and regulations in China. This FAQ is not intended as legal or regulatory advice. We encourage you to seek independent legal advice on your legal and regulatory obligations for your operations in China.\n\nCan I serve visitors in China with Edge Delivery Services?\n\nEdge Delivery Services in Adobe Experience Manager requires a customer-managed Content Delivery Network (BYOCDN) to serve content to visitors in China.\n\nAdobe Experience Manager Edge Delivery Services uses two redundant global content delivery networks (CDNs) to deliver experiences across the world. 
However, certain regions require using a local CDN to serve the content within that region.\n\nTo account for this, customers should use a customer-managed CDN (BYOCDN) in China to serve visitors from China. CDN operators in China require customers to provide an Internet Content Provider (ICP) license or ICP recordal.\n\nCan content be authored and published from China?\n\nAuthoring content from China is possible with limitations, but not fully supported by Adobe.\n\nIn order to use document-based editing from within China, three things must be accessible to the author:\n\nThe document source – note that Google Docs, Sheets, and Drive are not available in China\nThe API for previewing and publishing content on admin.hlx.page – this API is typically available from within China, but often has increased latency\nThe preview URLs on aem.page – this host is available most of the time, but disruptions have been observed\nHow is the developer experience from China?\n\nEdge Delivery Services consume source code from GitHub. Developers collaborating through GitHub from China may experience network latency which is out of Adobe’s control.\n\nIs AEM Hosting available in China?\n\nEdge Delivery Services in Adobe Experience Manager are currently operated outside of China. For customers that require all AEM servers to be hosted in China due to various reasons including, but not limited to, compliance such as Multi-Level Protection Scheme (MLPS), data residency, Baidu SEO, etc., 
Adobe Managed Services for Adobe Experience Manager are available in China.\n\nPrevious\n\nGlobal Availability\n\nUp Next\n\nSecurity","lastModified":"1771843053","labs":""},{"path":"/docs/global","title":"Global Availability","image":"/docs/media_19919346e192f09333613dc67d53a8dc82b3d5f17.png?width=1200&format=pjpg&optimize=medium","description":"Adobe Experience Manager is globally distributed and fully redundant","content":"style\ncontent\nhttps://main--helix-website--adobe.aem.page/docs/global.json\nGlobal Availability\n\nThe Edge Delivery Services architecture for Adobe Experience Manager makes use of redundant content delivery networks (CDNs) to ensure high availability with low latency. The cacheable components of your sites are delivered directly from globally-distributed points of presence (POPs).\n\nYour sites, globally available\n\nThe two highest priorities in delivering web experiences are availability and performance. For this reason, we use multiple content delivery networks to provide the Adobe Experience Manager service, so that in the rare case of a CDN outage, we can switch the full delivery stack to a separate CDN and provide continued high availability.\n\nEach CDN uses dozens to thousands of different points of presence (POPs): high-performance caching servers located at critical internet junctures in global locations, close to population centers. These POPs cache the content served for your websites and make it available to “nearby” visitors, minimizing the inherent network latency of interactions.\n\nOperational Telemetry is used to maintain awareness of site stability and availability over time.\n\nYour content, globally distributed\n\nIn order to ensure this global availability, your cacheable content has to be globally distributed to the same extent that it is published to global visitors. 
Each of the core storage areas of the AEM content hub (media, content, code) is therefore equally globally-distributed to be within reach of those global visitors.\n\nWhat about data residency?\n\nThe Edge Delivery Services architecture is designed for global publication of information that would normally appear on public sites. Site owners are responsible for ensuring that the information they choose to publish is in compliance with applicable laws or regulations. Likewise, the data we collect for Operational Telemetry is deliberately limited for the sake of privacy and compliance.\n\nAnd holiday or event readiness?\n\nIf you are expecting traffic spikes due to seasonal or regional events such as public holidays, major promotions, sports, or cultural events then there is no need for advance capacity planning, advance notification, or request for assistance. The edge delivery network is able to handle this traffic any time of the year.\n\nPrevious\n\nSecurity\n\nUp Next\n\nChina","lastModified":"1758546917","labs":""},{"path":"/docs/staging","title":"Staging & Environments","image":"/docs/media_1653b9783b3ef54a8735c692392209e6b06f65ba9.png?width=1200&format=pjpg&optimize=medium","description":"With testing environments for each branch, do you really need a staging environment?","content":"style\ncontent\n\nStaging & Environments\n\nSeparating environments for testing and production is an important practice to ensure high availability of your site, which is why AEM provides each developer with a way to test work on their branch that is separate from the production environment on the main branch.\n\nMany organizations also require in their guidelines the creation of a dedicated staging environment. In this guide we discuss the role of staging environments and best practices for implementing them.\n\nYou probably don’t need a staging environment\n\nEach repository running on *.aem.live has as many testing environments as there are branches. 
As long as you enable the GitHub branch protection rules “Require a pull request before merging,” “Require status checks to pass before merging,” and “Require branches to be up to date before merging,” each of your feature branches will be up to date with the main branch before it can be merged and will accurately reflect the nature of your published site.\n\nThe practice of setting up a staging environment emerged at a time when computing resources were scarce, and an entire development team had to share a single environment to test their changes. This could lead to adverse effects; for instance, a breaking bug in one branch could block all other development from progressing.\n\nWith dedicated branches for each feature and fix, and a virtually unlimited number of preview environments for these branches, this is no longer an issue and the workaround of a dedicated staging environment is no longer needed.\n\naem.page is not a staging environment\n\nIn content management, discussions of staging environments are complicated by the fact that authors want to preview their content before it is published and developers want to preview their code changes before they are merged. With the separation of *.aem.page for content previews and *.aem.live for published content, we provide this ability to authors.\n\nIn order to make previewing content as fast as possible, *.aem.page uses different caching rules from *.aem.live, with one being optimized for immediacy, the other for high cacheability.\n\nThis means that *.aem.page and *.aem.live behave differently, and consequently *.aem.page should not be used as a staging environment for code. 
It is a preview environment for content.\n\nYou can preview code in all places\n\nAs you sometimes need to preview code that refers to content that has not yet been published, the *.aem.page environment supports the same branch-based creation of testing environments.\n\nWhen you should set up a staging environment\n\nFor the vast majority of customers, rule 1: “you probably don’t need a staging environment” applies. In case you are applying complex configurations, rewrite rules, or even custom edge code in your content delivery network (CDN), it is highly advisable to set up a staging environment for the CDN.\n\nThe following tips apply:\n\nUse a hostname that follows the pattern of your main website. If your main site is on www.example.com then use stage.example.com as the hostname\nUse your main branch on *.aem.live as the origin – your intent is to test the CDN, not AEM-specific code which has been tested on the *.aem.live testing environment. You want to work with real content and accurate caching rules, so use *.aem.live as the origin\nFlush caches deliberately. Through the built-in CDN integration, publishing content to *.aem.live and merging code into the main branch will surgically purge the cache to your production CDN. You should ensure that your staging CDN will be purged when appropriate\n\nSetting up a staging CDN is something that is only advised to a small subset of customers. 
An even smaller subset will need to test the interaction between client-side code and CDN configuration.\n\nFollowing the guideline that there should be as many staging “environments” as there are active branches, it is recommended to set up a staging CDN that mirrors the *.aem.live URL structure, so that ref.examplestage.com will point to ref--site--example.aem.live, allowing each developer to access a dedicated testing environment that combines client-side code and CDN-side configuration in one accessible place.\n\nFollowing the reasoning above, creating a separate staging branch and mapping this to a separate CDN site is not advisable.\n\nPrevious\n\nArchitecture Overview\n\nUp Next\n\nSecurity","lastModified":"1727180048","labs":""},{"path":"/developer/folder-mapping","title":"Folder Mapping","image":"/developer/media_14345012e0419f1ec5ddd302eab00a7cf60fdc7d1.png?width=1200&format=pjpg&optimize=medium","description":"Folder mapping should only be used in cases where SEO and GEO play no role. This includes authenticated sites, or SPAs that display content that ...","content":"style\ncontent\n\nFolder Mapping\n\nFolder mapping should only be used in cases where SEO and GEO play no role. This includes authenticated sites, or SPAs that display content that should not be indexed.\n\nSince Folder Mapping causes a number of issues beyond SEO, including an infinite URL space that serves 200 responses, Folder Mapping is feature-flagged to prevent accidental misuse.\n\nAnti-Patterns\n\nFolder mapping should not be used for:\n\nSites that have SEO/GEO needs, specifically product information that requires JSON-LD in the initial payload.\nSites using a combination of a large number of folder-mapped pages and frequently changing metadata. For such use cases, it is better to provide the content via different methods. 
Talk to your Adobe contact to learn more.\nMapping of excessively dynamic or infinite URLs like /search/<query>; dynamic search results are better served via query parameters or the URL hash.\nGenerating non-cacheable content like user-specific URLs. This can lead to cache pollution and possible security risks if content is unintentionally cached.","lastModified":"1772810999","labs":""},{"path":"/developer/franklin-video-series-git-repo-setup","title":"Franklin Getting Started Episode 3 Addendum","image":"/developer/media_17531c5817dba9e27ed6963d25d92986d72d70014.jpeg?width=1200&format=pjpg&optimize=medium","description":"","content":"style\ncontent\n\nFranklin Getting Started Episode 3 Addendum\nConfigure Visual Studio Code to work with your Git repo.\n\nIn this video, you will learn:\n\nHow to configure your Git repo locally\nHow to access your repo from Visual Studio Code\nhttps://smartimaging.scene7.com/is/content/DynamicMediaNA/franklinonboarding/FranklinAddendumEpisode2_2.mp4\nVideo Resources:\nDeveloper Tutorial\nThe Anatomy of a Franklin Project\nTerminal Command to set username and password\ngit config user.name \"user\"\n\ngit config user.password \"password\"\n\n\nBack to Franklin Video Series\n\nPrevious\n\nFranklin Fundamentals Ep: 2\n\nUp Next\n\nFranklin Fundamentals Ep: 4","lastModified":"1687891297","labs":""},{"path":"/docs/network-profile","title":"Network Profile","image":"/docs/media_172c0e6ce73b5b5798586d9ad60b04ed7ed933cf7.png?width=1200&format=pjpg&optimize=medium","description":"Take a deep dive into the network profile of Adobe Experience Manager Services.","content":"style\ncontent\n\nNetwork Profile\nArchitecture\n\nThe AEM Delivery services are set up as origins behind Content Delivery Network (CDN) infrastructure for production and are accessed by authors, stakeholders and developers directly from their browsers. 
In some cases, the CDN infrastructure for production sites is hosted and managed by AEM customers directly (we call this BYO CDN) or it is managed by Adobe via a Cloud Service CDN (e.g., BYO DNS).\n\nThe network profile below is relevant both for the interactions of end-users directly with their browsers (or other clients) and for setup and communication from your CDN.\n\nDelivery Services\nDNS Scheme\n\nWe use a DNS scheme that identifies each origin with the following pattern: https://<ref>--<site>--<org>.aem.page (for preview) and https://<ref>--<site>--<org>.aem.live (for live content). <ref>, <site> and <org> commonly identify GitHub repositories and references (branches or tags). Special characters in references, orgs and repositories are replaced by a single -. The maximum length of a domain name label is 63 characters, which limits the length of the <ref>--<site>--<org> combination.\n\nThe DNS records for aem.page and aem.live are delivered with a short 10-minute time to live (TTL), so that we can switch between delivery stacks. This happens automatically.\n\nHTTP and TLS versions\n\nThe AEM Delivery endpoints support HTTP/1.1 and HTTP/2 (H2) and TLS 1.2. HTTPS (via TLS) is enforced. All .page and .live subdomains additionally set HTTP Strict Transport Security (HSTS) headers, so that downgrades to HTTP are prohibited.\n\nHTTP Headers\n\nMany HTTP headers can be configured by the project or are dependent on the payload, but there are some headers that are handled out of the box.\n\nCache-Control\n\nThe client-facing cache-control headers are set automatically by default to reflect production-tested, continuously evolving best practices based on resource types, response code and origin type. We recommend passing them on to the client \"as is\" through a customer's CDN.\n\nSome resources that are immutable have very long max-age and others that tend to change more frequently have shorter max-age. 
On preview (.page) origins max-age is set very short, to avoid the explicit need for authors, developers and stakeholders to clear their browser cache when making changes.\n\nWe don't have any data that suggests that changing the cache-header values on a per customer basis provides improved experiences to visitors of the sites and we do have data that shows adverse effects from changes, but we appreciate that this is a contract between the browser and a customer's CDN and understand that this is out of scope for AEM to fully control.\n\nIt is important to note that these values apply only to the cache-control headers sent to the client, and are not related to cache management in the CDN, which is using push invalidation.\n\nCache-Control Header Values on .aem.page\nResponses for\t Header Value \n 200 - Code (.js, .css, .svg, etc.)\t max-age=60, must-revalidate \n 200 - Content (text/html, application/json)\t max-age=60, must-revalidate \n 200 - Media (images, videos, etc.)\t max-age=2592000, must-revalidate \n 301 - Moved Permanently\t max-age=60, must-revalidate \n 304 - Code\t max-age=60, must-revalidate \n 304 - Content\t max-age=60, must-revalidate \n 404 - Code\t max-age=60, must-revalidate \n 404 - Content\t max-age=60, must-revalidate \n 5xx\t max-age=60, must-revalidate \n 3xx, 4xx and 5xx - Media\t max-age=3600, must-revalidate\nCache-Control Header Values on .aem.live\nResponses for\t Header Value \n 200 - Code (.js, .css, .svg, etc.)\t max-age=7200, must-revalidate \n 200 - Content (text/html, application/json)\t max-age=7200, must-revalidate \n 200 - Media (images, videos, etc.)\t max-age=2592000, must-revalidate \n 301 - Moved Permanently\t max-age=7200, must-revalidate \n 304 - Code\t max-age=7200, must-revalidate \n 304 - Content\t max-age=7200, must-revalidate \n 404 - Code\t max-age=7200, must-revalidate \n 404 - Content\t max-age=7200, must-revalidate \n 5xx - Code and Content\t max-age=7200, must-revalidate \n 3xx, 4xx and 5xx - Media\t 
max-age=3600, must-revalidate\nVary\n\nThe vary header is set to Accept-Encoding,X-Forwarded-Host.\n\nX-Robots-Tag\n\nThe x-robots-tag header is automatically set to noindex, nofollow on any .live and .page origin to avoid indexing. This header is removed by the CDN tier for production only.\n\nCDN Specific Headers\n\nTo manage the cache consistency with your CDN in an optimal way and support precise push invalidation, there are custom headers set for each supported CDN that control the CDN's caching behavior and cache keys. The terminology and headers as well as the available features and semantics vary greatly between vendors.\nThese headers are only added for requests coming from a CDN, are consumed by the CDN, and are not surfaced to the browser or other clients.\n\nURL Space\n\nThe available URL space on the .page and .live origins is limited to a combination of upper and lowercase Basic Latin letters (A-Z and a-z), numbers (0-9), dash (-), underscore (_), period (.) and forward slash (/). Certain combinations of . and/or / in direct succession are also not valid.\n\nIf you need to service a broader URL space, we recommend rewriting URLs on your CDN tier.\n\nAccess to the available URL Space\n\nDevelopers and Authors have access to the full URL space via resources coming from GitHub, folder names or redirect sources coming from the redirects spreadsheet.\n\nAccess to a more limited URL Space\n\nFile names in content sources (documents and spreadsheets in SharePoint or Google Drive) are automatically rewritten to a narrower character set including only lowercase Basic Latin letters (a-z), numbers (0-9) and dash (-) with the corresponding extension appended.\n\nAdmin Service (API)\n\nThe admin service endpoint available on admin.hlx.page (see API Spec here) is built to be accessed via a broad range of HTTP clients including browsers, command line tools as well as common HTTP clients.\n\nIt supports HTTP/1.1 and HTTP/2 (H2) with TLS 1.2. 
HTTPS (via TLS) is enforced.","lastModified":"1730430721","labs":""},{"path":"/developer/change-site-root","title":"Change Site Root","image":"/developer/media_15282d0c76e2c630cf008d927887acfa0dc92012a.png?width=1200&format=pjpg&optimize=medium","description":"The AEM Boilerplate project assumes that the document root is situated at the project's root. However, if your project begins on a subpage of another ...","content":"style\ncontent\n\nChange Site Root\n\nThe AEM Boilerplate project assumes that the document root is situated at the project's root. However, if your project begins on a subpage of another site, it is necessary to adjust the site root to align with the path.\n\nModifying the site root involves relocating various files and folders in both the document storage and code repository, along with making code updates where paths are referenced. The new folder should have identical names in both the document storage and GitHub, and it should also correspond to the name in the path where the site will be located.\n\nThat means that if the site will be located at https://yourdomain.com/topics, the recommended folder name is topics.\n\nNote: If you plan to move multiple sections of the site over at different times, then follow the “Code Changes” section but move it to a unique folder name (that does not clash with any existing paths you may already have) like aemedge so that it is not tied to just one section of the site content.\n\nContent Changes\n\nTo update your document storage, follow these steps:\n\nCreate the new root folder in document storage\nMove all items into the new folder, excluding the following:\n.helix folder\nredirects (.xlsx)\nblock-library folder (if it exists)\nPublish the relocated files in their new location\n\nIMPORTANT: In case any files in the previous location have been previewed or published, it is advisable to first delete the published content.
These outdated files can be seen as duplicate content, will probably not get updates from authors and confuse visitors who stumble upon them.\n\nCode Changes\n\nTo update your project code, follow these steps:\n\nCreate the new root folder in your code, using the identical folder name used in document storage\nMove the following folders into the new folder:\nblocks\nfonts\nicons\nscripts\nstyles\nUpdate the paths to styles and scripts in the following files:\nhead.html\n404.html\nIn font.css, update the paths to your font files in the src properties of @font-face definitions\nSince the fonts folder was moved, these paths may need to be updated\nIn package.json, update the lint:css path for both blocks and styles\n\nNote: window.hlx.codeBasePath is set automatically by the AEM Boilerplate code, so you do not need to explicitly define it in your scripts.js.\n\nOther things to look for\n\nDepending on your stage of development, you might have to identify additional instances where paths are referenced.\n\nLook for fetchPlaceholders() calls. Pass your folder name as an argument\ne.g. fetchPlaceholders('/your-folder-name')\nLook for loadCSS() or loadBlock() calls\nIn some cases the window.hlx.codeBasePath may not have been used\nLook for HTML snippets that are included in blocks\nYou can incorporate HTML snippets into your custom code. If the snippet contains relative paths, it may be necessary to update them\nLook for any custom Git workflows\nIn a workflow, file relocations may occur, necessitating the correction of paths as needed","lastModified":"1762973638","labs":""},{"path":"/docs/setup-googledrive","title":"How to use Google Drive","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Edge delivery service will access the google drive using a registered user during preview and publishing. 
You can choose to either use the default helix@adobe.com ...","content":"style\ncontent\n\nHow to use Google Drive\n\nEdge delivery service will access the google drive using a registered user during preview and publishing. You can choose to either use the default helix@adobe.com user or register your own.\n\nIf you choose the default user, please follow the steps highlighted in the tutorial.\nIf you want to use a custom user, please read Setup Customer Google Drive","lastModified":"1725864574","labs":""},{"path":"/developer/block-collection/breadcrumbs","title":"Breadcrumbs","image":"/developer/block-collection/media_149fe08afe1cb9961a41d4b08682d45d1f90a3b20.jpg?width=1200&format=pjpg&optimize=medium","description":"Breadcrumbs are a list of page titles and relevant links showing the location of the current page in the navigational hierarchy.","content":"style\ncontent\n\nBreadcrumbs\nNotes:\n\nBreadcrumbs are a list of page titles and relevant links showing the location of the current page in the navigational hierarchy.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nThis code is included in the header block in AEM Block Collection, simply copying the .css file and the .js file will add this block to your project.\n\nBlock Code\n\nPrevious\n\nFragment\n\nUp Next\n\nBlock Party","lastModified":"1772010425","labs":""},{"path":"/developer/block-collection/search","title":"Search","image":"/developer/block-collection/media_149fe08afe1cb9961a41d4b08682d45d1f90a3b20.jpg?width=1200&format=pjpg&optimize=medium","description":"Search allows users to find site content by entering a search term. If a content source is not provided, the site’s /query-index.json will be used.","content":"style\ncontent\n\nSearch\nNotes:\n\nSearch allows users to find site content by entering a search term. 
If a content source is not provided, the site’s /query-index.json will be used.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nThis code is included in AEM Block Collection, simply copying the .css file and the .js file will add this block to your project.\n\nBlock Code\n\nPrevious\n\nFragment\n\nUp Next\n\nBlock Party","lastModified":"1772012659","labs":""},{"path":"/developer/block-collection/modal","title":"Modal","image":"/developer/block-collection/media_149fe08afe1cb9961a41d4b08682d45d1f90a3b20.jpg?width=1200&format=pjpg&optimize=medium","description":"A modal is a popup that appears over other site content. It requires a click interaction from the user to open, and another interaction to ...","content":"style\ncontent\n\nModal\nNotes:\n\nA modal is a popup that appears over other site content. It requires a click interaction from the user to open, and another interaction to close before they can return to the site underneath.\n\nThe modal is not a traditional block. 
Instead, links to a /modals/ path automatically create a modal.\n\nExample:\n\nSee Live output\n\nContent Structure:\n\nSee Content in Document\n\nCode:\n\nThis code is included in AEM Block Collection, simply:\n\ncopy the .css file and the .js file and add to your project.\ncopy the autoLinkModals() function and add to your scripts.js file\n\nBlock Code\nScripts Code\n\nPrevious\n\nFragment\n\nUp Next\n\nBlock Party","lastModified":"1772012432","labs":""},{"path":"/developer/configuring-aem-assets-sidekick-plugin","title":"Configuring Adobe Experience Manager Assets Sidekick Plugin","image":"/developer/media_1cc54f37e3f8f2578803fe1a1a3b92f27612a507c.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to configure the Adobe Experience Manager Assets Sidekick plugin, so you can use assets from your Experience Manager Assets repository while authoring documents ...","content":"style\ncontent\n\nConfiguring Adobe Experience Manager Assets Sidekick Plugin\n\nLearn how to configure the Adobe Experience Manager Assets Sidekick plugin, so you can use assets from your Experience Manager Assets repository while authoring documents in Microsoft Word or Google Docs.\n\nImportant Note: For using assets while authoring a page using Universal Editor based WYSIWYG authoring, please refer to Universal Editor Custom Asset Picker. The documentation here is specifically for document based authoring experience.\n\nFor information on the authoring experience while using the Sidekick plugin, please see Adobe Experience Manager Assets Sidekick Plugin.\n\nNote: Using non-image assets such as Videos and PDF documents is in an Early Adopter program. 
Find more details here.\n\nConfigure Your Sidekick\n\nTo enable the Assets plugin in your Sidekick, you must create a configuration in your project’s GitHub.\n\nOpen your project in GitHub and locate the Sidekick configuration file at tools/sidekick/config.json.\nIf the file does not yet exist, create it.\nFor more details, please refer to this document on extending AEM Sidekick.\nIn your config.json file, you must add an asset-library section.\nYou can either add the asset-library section to your existing configuration, or replace your configuration with the example provided.\nPlease note that you should provide a title for the plugin that website authors will understand (“My Assets” in the example).\n{\n    \"project\": \"franklin-asset-selector\",\n    \"host\": \"www.mymydomain.prod\",\n    \"plugins\": [\n        {\n            \"id\": \"asset-library\",\n            \"title\": \"My Assets\",\n            \"environments\": [\n                \"edit\"\n            ],\n            \"url\": \"https://experience.adobe.com/solutions/CQ-helix-assets-addon/static-assets/resources/asset-selector.html\",\n            \"isPalette\": true,\n            \"includePaths\": [\n                \"**.docx**\"\n            ],\n            \"passConfig\": true,\n            \"paletteRect\": \"top: 50px; bottom: 10px; right: 10px; left: auto; width:400px; height: calc(100vh - 60px)\"\n        }\n    ]\n}\n\nCommit the configuration to your project's Github repo.\nOpen up the AEM sidekick in a Microsoft Word or Google Doc project document. You should see the My Assets button. The label of the button will depend on your configuration.\n\nAdvanced Configurations\n\nYou can optionally customize the Assets Sidekick plugin by specifying query parameters in the asset selector URL in the Sidekick config.json file. The following parameters are supported:\n\nrail (optional)\n\nPermissible Values: true/false\n\nUse this parameter to toggle the rail view of the asset selector. 
By default, the rail view is used, which is more compact and offers search-only experience. If you set the parameter to ?rail=false, a tree view left panel is shown with the repository folder hierarchy allowing users to browse for assets in addition to searching.\n\n{\n    \"id\": \"asset-library\",\n    \"title\": \"My Assets\",\n    \"environments\": [ \"edit\" ],\n    \"url\": \"https://experience.adobe.com/solutions/CQ-helix-assets-addon/static-assets/resources/asset-selector.html?rail=false\",\n    \"isPalette\": true,\n    \"includePaths\": [ \"**.docx**\" ],\n    \"passConfig\": true,\n    \"paletteRect\": \"top: 50px; bottom: 10px; right: 10px; left: auto; width:800px; height: calc(100vh - 60px)\"\n}\n\n\nPlease note that the folder hierarchy view occupies additional screen space. To accommodate it, you may have to increase the width of the asset selector panel.\n\nThe Asset Selector component leverages micro-frontend Asset Selector that is instrumented to integrate with Sidekick. To learn more, please see the Micro-Frontend Asset Selector documentation.\n\nextConfigUrl (optional)\n\nUse this parameter to extend the Assets Sidekick by specifying the complete URL where the configuration is hosted. This is useful for scenarios where the configuration resides at a different location than the project's domain. 
To configure extConfigUrl, include it in the sidekick JSON configuration of the extension.\n\n{\n    \"id\": \"asset-library\",\n    \"title\": \"AEM Assets Library\",\n    \"environments\": [\"edit\"],\n    \"url\": \"https://experience.adobe.com/solutions/CQ-helix-assets-addon/static-assets/resources/asset-selector.html?extConfigUrl=https://<your-config-server>/config.json\",\n    \"isPalette\": true,\n    \"includePaths\": [\"**.docx**\"],\n    \"passConfig\": true,\n    \"paletteRect\": \"top: 50px; bottom: 10px; right: 10px; left: auto; width:400px; height: calc(100vh - 60px)\"\n}\n\n\nEnsure that the URL supplied in the extConfigUrl parameter is accessible and that cross-origin resource sharing (CORS) is enabled for the domain experience.adobe.com.\n\nIf the configuration is hosted on an Edge Delivery Services project, refer to the documentation for detailed instructions on configuring CORS for hosted configurations.\n\nExtension Points\n\nYou can customize the AEM Assets Sidekick plugin using extension points to meet your custom requirements. These extension points enhance the plugin's functionality and adapt it to your organization's unique demands.\n\nThis section details the extension points available and provides example configurations.
See the following section, Configure Extension Points, for details on how to implement the extensions.\n\nblockName\n\nThis extension point defines the block name for different MIME types.\n\nExample Configuration (Video Type Assets):\n{ \n\t\"blockName\": [ \n    \t{ \n        \t\"mimeType\": \"video/*\", \n        \t\"value\": \"Core Embed\" \n    \t} \n\t] \n}  \n\n\nThis configuration allows users to change the Block Name for Video type assets from default “Embed” to “Core Embed”.\n\n\ncopyMode\n\nThis extension point defines the copy mode for different MIME types.\n\nExample Configuration (Image Type Assets):\n{ \n\t\"copyMode\": [ \n    \t{ \n        \t\"mimeType\": \"image/*\", \n        \t\"value\": \"reference\" \n    \t} \n\t] \n} \n\n\nThis configuration enables users to pick images from the Assets Selector as a link when pasting it into a Word document.\n\nNote: Once the config is enabled, you’d also need to ensure appropriate handling in your site’s front-end code to convert such image links to <picture/> tag with appropriate srcset. 
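As a loose sketch of what that front-end handling could look like — all names and query parameters below are illustrative assumptions, and AEM Boilerplate projects typically centralize this kind of logic in a helper such as createOptimizedPicture() in scripts/aem.js:

```javascript
// Hypothetical helper: builds a srcset for an asset URL delivered by
// reference. The query parameters mirror a common Boilerplate pattern
// but are not prescribed by the copyMode extension point itself.
function buildOptimizedSrcset(src, widths = [750, 2000]) {
  return widths
    .map((w) => `${src}?width=${w}&format=webply&optimize=medium ${w}w`)
    .join(', ');
}

// In a block's decorate() function you might then swap the pasted link
// for a <picture> element (sketch):
//   const img = document.createElement('img');
//   img.src = link.href;
//   img.srcset = buildOptimizedSrcset(link.href);
//   const picture = document.createElement('picture');
//   picture.append(img);
//   link.replaceWith(picture);
```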
Find more details here.\n\nblockTemplate\n\nThis extension point defines the template structure used for displaying video blocks, allowing flexibility in the structure of the block that gets copied to the Word document to align it with the block structures used in your site authoring.\n\nExample Configuration (Video Type Assets):\n\nThe following example demonstrates how to configure the blockTemplate for video assets:\n\n{ \n\t\"blockTemplate\": [\n \t{\n   \t\t\"mimeType\": \"video/*\",\n   \t\t\"value\": \"<table border='1' style=\\\"width:100%\\\">\\n  <tr>\\n        <td style=\\\"background-color:#2986cc;color:#fff\\\">${blockName}</td>\\n      </tr>\\n      <tr>\\n        <td>\\n        <img src=\\\"${posterUrl}\\\" alt=\\\"${name}\\\">\\n        <br/>\\n        <a href=\\\"${videoUrl}\\\">${name}</a>\\n        </td>\\n      </tr>   </table>\"\n \t}\n]\n} \n\n\nThis configuration would result in a block like the following example copied to the Word document when the asset is pasted. 
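Independently of the example above, the way such ${...} placeholders are filled can be illustrated with a small, hypothetical substitution sketch (not the plugin's actual code):

```javascript
// Illustrative only: expands ${placeholder} tokens in a block template
// string from a map of asset properties such as blockName, posterUrl,
// name and videoUrl. Unknown placeholders are left untouched.
function expandBlockTemplate(template, vars) {
  return template.replace(/\$\{(\w+)\}/g, (match, key) => (
    key in vars ? String(vars[key]) : match
  ));
}
```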
It has the poster image and the video URL and the block’s header row has been styled.\n\nVariables Used:\nblockName: Represents the name of the video block\nposterUrl: Contains the URL of the image that serves as the video poster or thumbnail\nname: Denotes the name of the video asset\nvideoUrl: Provides the public delivery URL of the video file\nfilterSchema\n\nThis extension point allows you to set up custom search filters to make it easier to find and narrow down assets based on its metadata, such as tags, status, type etc.\n\nExample Configuration (Adding Tag Picker to Rail Filter Schema):\n{ \n\t\"filterSchema\": [ \n    \t{ \n        \t\"header\": \"Assets Tags\", \n        \t\"groupKey\": \"AssetTagsGroup\", \n        \t\"fields\": [ \n            \t{ \n                \t\"element\": \"taggroup\", \n                \t\"name\": \"property=metadata.application.xcm:keywords.id\", \n                \t\"columns\": 3 \n            \t} \n        \t] \n    \t} \n\t] \n} \n\n\nThis configuration allows users to fetch the taxonomy from AEM and pick and apply tags from that AEM taxonomy to filter the assets.\n\nThe name attribute should be changed depending on the repository.\n\nFor the delivery repository: property=metadata.application.xcm:keywords.id\nFor the authors: property=xcm:keywords.id=\nassetDomainMapping\n\nThis extension point defines domain mappings, allowing domains to be replaced per your configuration. 
If a domain mapping doesn’t exist, the original delivery URL is retained.\n\nExample Configuration (Asset Domain Mappings):\n{\n    \"assetDomainMapping\": {\n        \"delivery-p123111-e1235123.adobeaemcloud.com\": \"mediapreprod.store.testdomain.com\",\n        \"delivery-p123111-e1235134.adobeaemcloud.com\": \"media.store.testdomain.com\",\n        \"delivery-p123111-e1235145.adobeaemcloud.com\": \"mediauat.store.testdomain.com\",\n        \"delivery-p123111-e1235156.adobeaemcloud.com\": \"mediaqa.store.testdomain.com\"\n    }\n}\n\n\nThis configuration replaces any delivery domain with its corresponding CDN-mapped or custom domain.\n\nConfigure Extension Points\n\nCustomization of the Assets Sidekick plugin, including its extension points, is done through the Sidekick’s project configuration file. There are two options for defining this configuration file:\n\nOption 1: Host Configuration in the Same Project\n\nIn this option, the configuration file should be defined at the following path within your Edge Delivery Services project:\n\n/tools/assets-selector/config.json\n\nAdditionally, ensure that the Assets Sidekick Plugin configuration includes the following parameter set to true:\n\npassConfig=true\n\nThis configuration is the same as described in the section Configure Your Sidekick.\n\nOption 2: Host Configuration Externally and Pass It to the Assets Sidekick Plugin\n\nYou can host the configuration file at any accessible URL and pass it to the Assets Sidekick plugin. You do this by using the extConfigUrl parameter to specify the URL from which the configuration will be fetched. This configuration is the same as described in the section Advanced Configurations.\n\nConfiguration Based on Page Context\n\nThe configuration can take a page’s URL as a parameter, allowing you to generate the configuration on the fly based on the page’s context. This enables the Assets Selector to use the page context when suggesting relevant assets.
By doing so, you can set up configurations dynamically in the Assets Selector based on the context (the page URL) passed to it.\n\nThis is particularly useful for scenarios where you want page authors to be shown only a subset of assets from your assets repository based on the context of the web page they are authoring, such as showing locale- or region-specific assets based on the locale of the page being authored.\n\nTo implement this, follow these steps:\n\nEnsure the Sidekick configuration has both passReferrer and passConfig flags set to true.\nHost a configuration that can be served by the following endpoint:\n\nhttps://<your-config-server>/config.json?webPath=<page-path>\n\nHere, page-path is the URL of the page being edited in the Word or Google Docs document, and the Assets Selector should launch in the context of that page-path. This endpoint should serve the complete configuration. It is expected that you implement the necessary server-side logic at this endpoint to serve the configuration, based on the provided context (the webPath), in the specified format accepted by the Asset Selector.\n\nFor example, the endpoint can look at the webPath and return a configuration which uses the page’s tags at that webPath to set the default tags in the Tag Picker filter of the Assets Selector.
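The server-side logic behind such an endpoint might be sketched as follows. Everything here is a hypothetical illustration — the locale detection, the returned fields, and the filter shape are assumptions; the real response must follow the configuration format accepted by the Asset Selector:

```javascript
// Hypothetical sketch: derive a context-dependent configuration from the
// webPath query parameter, e.g. /fr/products/... -> locale 'fr'. A real
// configuration would typically contain additional fields (repository,
// filters, etc.) as documented in the Asset Selector readme.
function buildConfigForPage(webPath) {
  const [, firstSegment = ''] = webPath.split('/');
  const locale = /^[a-z]{2}$/.test(firstSegment) ? firstSegment : 'en';
  return {
    filterSchema: [{
      header: 'Locale',
      groupKey: 'LocaleGroup',
      fields: [{ element: 'checkbox', name: `metadata.locale.${locale}` }],
    }],
  };
}
```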
In this way, the Assets Selector launches with those default tags applied, showing relevant assets based on the page's context.\n\nSample Configuration for Reference:\n\nFor more detail and reference configurations, please see the readme of the Assets Selector to help you adapt and deploy these configurations in your project based on your use cases and extension requirements.","lastModified":"1736616596","labs":"AEM Assets"},{"path":"/docs/authentication-setup-site","title":"Configuring Site Authentication","image":"/docs/media_184e8fddf29290e971f328b001a01ffcf138ebdef.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to enable visitor authentication on an AEM site.","content":"style\ncontent\n\nConfiguring Site Authentication\n\nAEM Live supports token-based authentication. Site authentication is usually applied to both the preview and publish sites, but can also be configured to only protect either site individually.\n\nWarning\n\nEnabling Site Authentication for the publish sites (*.aem.live) will enforce authentication for all your site visitors (intranet). It will also prevent automatic PSI (Page Speed Insights) checks from running on your pull requests in GitHub. For use cases where your BYO CDN should use no (or different) authentication from your .live origin, you will need to configure preview only authentication or bypass authentication with an API_KEY.\n\nLimitations\nOnly authentication is supported. Authorization is not supported.\nAuthentication can only be enabled or disabled for the entire site\nIt is not possible to create a custom error page for denied access\nEnable Authentication for the Preview and Publish Sites\n\nClick here to view instructions on how to update the configuration in document mode\n\n1. Enable the Configuration Service\n\nAll of the following instructions use the Configuration Service for your site, so follow the linked instructions to enable and authenticate, then perform the following API requests.\n\n2. 
Create a site token to access your protected site\n\nPOST an empty body to https://admin.hlx.page/config/{org}/sites/{site}/secrets.json. The response will be a JSON object containing an id and a value field. Remember both; you'll need them for the next steps:\n\ncurl -X POST https://admin.hlx.page/config/acme/sites/website/secrets.json \\\n  -H 'x-auth-token: <your-auth-token>'\n{\n  \"id\": \"SGFsbG8gVG9iaWFz\",\n  \"value\": \"hlx_ZGFzIGlzdCBkZWluIHRva2Vu\",\n  \"created\": \"2024-08-21T18:28:54.075Z\"\n}\n\n\nNote that you now have two tokens:\n\nThe auth token for the admin API obtained during login. This one is highly sensitive and cannot be used for site authentication.\nThe site token, which you just created, which can be shared with users and systems that need to access your site.\n3. Enable the token to access your site\n\nPOST to https://admin.hlx.page/config/{org}/sites/{site}/access/site.json so that accessing your site on aem.page and aem.live requires the token value you've retrieved in the previous step.\n\ncurl -X POST https://admin.hlx.page/config/acme/sites/website/access/site.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-auth-token>' \\\n  --data '{\n    \"allow\": [\"*@acme.com\", \"*@adobe.com\"],\n    \"secretId\": [\"SGFsbG8gVG9iaWFz\"]\n  }'\n\n{\n  \"allow\": [\"*@acme.com\", \"*@adobe.com\"],\n  \"secretId\": [\"SGFsbG8gVG9iaWFz\"]\n}\n\n\nThe response contains the content of the access.site object. Each POST request overwrites that object, so if you want to update any content, make sure to perform a GET request to the same URL first and include the original content.\n\nThe example above sets the site property, which controls access to both aem.page and aem.live. This is the most restrictive approach: to limit access to both aem.page and aem.live, POST to .../access/site.json. If you want to limit access to aem.page only, post to .../access/preview.json.
In the unlikely case that you want to limit access to aem.live only, and keep aem.page, post to .../access/live.json.\n\nIf you have set tokens for site and either preview or live, then preview and live will override the site-wide settings.\n\n4. Verify that access has been limited\n\nWhen you open your site in your browser, you will see an HTTP 401 status code, indicating that no access is possible without authentication. Next, try to access the site and provide the token value:\n\ncurl https://main--website--acme.aem.live \\\n  -H 'authorization: token hlx_ZGFzIGlzdCBkZWluIHRva2Vu'\n\n\nIn this request we use the site token value in the Authorization header.\n\n5. Make your CDN pass the right Authorization header\n\nWith this change, nobody can access your site without the correct authorization header. This includes your CDN, and therefore every visitor to your site. To enable access again, you need to add the Authorization header to each origin request your CDN makes.\n\nThe CDN setup instructions explain how to enable the authorization header for each supported Content Delivery Network.\n\nBrowser Access to the Protected Sites\n\nAccessing protected sites directly from a browser requires users to have an appropriate role defined in the project configuration and to sign in using the AEM Sidekick Extension.\n\nExample\n\nThe following excerpt of the access object of a site configuration for acme/website enforces the following:\n\nProtects *--website--acme.aem.page\nAllows to access *--website--acme.aem.page with a site token that has the id SGFsbG8gVG9iaWFz\nDoes not protect *--website--acme.aem.live\nEnforces authentication of the admin API for acme/website\nAllows sidekick authenticated users that have a *@acme.com email to preview, publish, etc.\nAllows performing admin API operations with an admin token with the JWT jti 1kLvEvoipnINAOGDP8NVl3IYbJy2qmUQa5b1Fe23S7tt depending on the included scopes.\n{\n \"access\": {\n   \"preview\": {\n     \"allow\": [\n     
  \"*@acme.com\"\n     ],\n     \"secretId\": [\n       \"SGFsbG8gVG9iaWFz\"\n     ]\n   },\n   \"admin\": {\n     \"role\": {\n       \"publish\": [\n         \"*@acme.com\"\n       ]\n     },\n     \"requireAuth\": \"true\",\n     \"apiKeyId\": [\n       \"1kLvEvoipnINAOGDP8NVl3IYbJy2qmUQa5b1Fe23S7tt\"\n     ]\n   }\n }\n}","lastModified":"1743595530","labs":""},{"path":"/developer/font-fallback","title":"Font Fallback and CLS","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to create a font fallback","content":"style\ncontent\n\nFont Fallback and CLS\n\n…in the context of Core Web Vitals\n\nLoading a custom font (a font which is not known by default by all browsers) can easily introduce delays in the loading sequence and a CLS (Cumulative Layout Shift) problem. This is true for the first page load on your site, until the font is cached by the browser.\n\nYou can try to preload and/or push the custom font, but there will always be a time when the browser will render the body with a default font. This is especially true on slow devices or on a slow connection. You can hide the full body until the font is loaded, but then the FCP (First Contentful Paint) and LCP (Largest Contentful Paint) scores will be highly penalized.\n\nYou can also use CSS tricks like font-display: optional, but then the font might not be loaded at all on a slow connection: the browser has the instruction to ignore it after a certain (short) delay.\n\nThis last solution does not prevent the CLS score from being impacted: when swapping the font, the size of the content might change, and the layout might “shift”.\n\nThere is another mechanism that can be used to prevent both issues (font loading delay and CLS): use a font fallback.\n\nThe idea of the font fallback is to scale a browser default font so that when applied to the content of a page, the content takes up the same \"space\" with the font fallback and with the custom font.
Like this, you can swap one with the other without a layout shift. If you use a default font that looks “similar” enough to the custom font, those few milliseconds of first-time page load will look really close to the experience you want to provide with the custom font. You can then defer the custom font loading, or at least make it non-blocking in the loading sequence.\n\nThe font-fallback extension\n\nWe have built a Chrome extension to help compute the font fallback.\n\nWhen you open the extension on a page, it analyzes the font faces in use on that page: a font face is a combination of font family and font weight. For each of those font faces, you can select a default browser font and run the font fallback computation. For each font face, the extension computes which size-adjustment of the default browser font is required so that the content has the same width with both fonts.\nThe computation happens only on the width because the height should be controlled by the line-height: you should set a line-height on your site so that the font does not decide the height of the text. This is true for paragraphs, headings, lists…\n\nNote: with font kits, there is a tendency to load many font faces that are never used (or used somewhere on the site other than the current page). The extension also shows how many font faces are loaded in total. This is a good indicator of whether some optimization can be done: by optimizing the font kit to reduce the font faces it delivers, you can save a lot of bytes!\n\nOnce the computation work is done, you can copy the computed CSS and paste it into your main CSS.\n\nThe generated font families must be added after the custom font in your CSS font-family: the browser will initially not find the custom font and will use the fallback until, later, the custom font is loaded.
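The copied CSS might look roughly like this — a sketch only; the family names and the size-adjust percentage are hypothetical, and the extension computes the real value per font face:

```css
/* Hypothetical generated fallback: Arial scaled to match "Custom Sans" */
@font-face {
  font-family: custom-sans-fallback;
  src: local('Arial');
  size-adjust: 96.4%;
}

body {
  /* the fallback is listed after the custom font, so it is used only
     until the custom font has finished loading */
  font-family: 'Custom Sans', custom-sans-fallback, sans-serif;
  line-height: 1.5; /* explicit line-height keeps text height stable */
}
```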
Since they use the same space, the swap does not generate CLS!\n\nTo fine tune the font adjust value, you can also use the simulation panel.\n\nChecking the box replaces the custom font by the computed fallback font directly in the page. The text input allows you to change the size-adjust value and immediately visualize the result (you need to toggle the checkbox each time you change the value).\n\nEdge cases\n\nThe extension makes a calculation based on an existing site, with existing content and CSS rules. In a different context, the size-adjust may not fit perfectly and the font swap may still produce CLS:\n\nContent dependency: the text iiiii has a smaller width than mmmmm for the same number of characters. The number of m in the text has a bigger influence on the width. But you can find a size-adjust that matches the average text by running the extension on multiple pages and taking an average of the size-adjust.\nSmall container: if your text is in a small container (especially true on mobile) with a restricted width and height, the space allowed to the text is then limited and the computation of the fallback might need to be more precise, i.e. made on this exact text.\n\nYou will find more technical details and other tools directly In the extension repository: https://github.com/adobe/helix-font-fallback-extension","lastModified":"1725864574","labs":""},{"path":"/developer/target-integration","title":"Configuring Adobe Target Integration","image":"/developer/media_1ba5390c0b3026987033e62fe020d3f7dae6c0332.png?width=1200&format=pjpg&optimize=medium","description":"This article will walk you through the steps of setting up an integration with Adobe Target so you can personalize your pages via the Adobe ...","content":"style\ncontent\n\nConfiguring Adobe Target Integration\n\nThis article will walk you through the steps of setting up an integration with Adobe Target so you can personalize your pages via the Adobe Target Visual Experience Composer (VEC). 
Before you go further, please also check our native Experimentation capabilities.\n\nWe support 2 different ways of integrating with Adobe Target. The recommended future-proof approach is via the Adobe Experience Platform WebSDK, but we also support the legacy Adobe Target at.js approach.\n\nConsiderations\n\nBefore we dive into the technical implementations, let us quickly recap how Adobe Target works so we set the proper expectations.\n\nAdobe Target lets you modify the content of an existing page based on the personalization parameters you define in Adobe Target Visual Experience Composer (VEC) or equivalent. The rules will be dynamically evaluated server-side and Adobe Target will deliver a list of page modifications that will be applied to the rendered page after the blocks have been decorated.\n\nThere are thus a few things to keep in mind:\n\nPage modifications are done on the final page markup after the blocks have been decorated. So if you want to change block behaviors based on modifications that Adobe Target will be doing (like setting a CSS class or attribute), you’ll have to leverage the MutationObserver API\nAdobe Target will be modifying blocks after they have been decorated and shown on the page. In most cases, this is not an issue as applying the modifications takes only a few milliseconds, but if you run complex code snippets this will likely trigger some page flickering and impact the user experience\nThe roundtrip to the Adobe Target backend services to obtain the list of page modifications that need to be applied is done during the eager phase and will impact the overall LCP. The first call to the endpoint is also typically slower, while subsequent calls will be on a warmed-up service with cached responses\nSince the instrumentation has an overhead, we recommend only enabling it on selected pages that are meant to be experimented on or personalized. 
The easiest way is to add a page metadata entry, like Target: on, that will act as a feature flag.\n\nIn our tests, you can expect a baseline performance impact as below. To this you’d also need to add the overhead of more complex page modifications, especially when using custom JavaScript snippets.\n\nMobile\n\t Largest Contentful Paint\t Total Blocking Time\t PageSpeed \n 1st call\t +1.3s\t 20~40ms\t 0~5 pts \n subsequent calls\t +0.1s\t 20~40ms\t 0~3 pts\nDesktop\n\t Largest Contentful Paint\t Total Blocking Time\t PageSpeed \n 1st call\t +0.5s\t 0~20ms\t 0~3 pts \n subsequent calls\t +0.3~0.5s\t 0~20ms\t 0~3 pts\nAdobe Experience Platform WebSDK\n\nTo enable Adobe Target integration in your website using the Adobe Experience Platform WebSDK (aka alloy), please follow these steps in your project:\n\nStart by following the steps to Use Adobe Target and Web SDK for personalization, and skip all steps related to actual instrumentation at the code level\nMake sure to note down the Adobe IMS Organization Identifier (orgId) as well as the Adobe Experience Platform Datastream Id (datastreamId, formerly edgeConfigId) you want to use. 
You can get those by following the WebSDK configuration documentation (orgId, edgeConfigId).\nIn your GitHub repository for the website, add the alloy.js file.\nThen edit your scripts.js file and add the following code somewhere above the loadEager method definition:\nfunction initWebSDK(path, config) {\n  // Preparing the alloy queue\n  if (!window.alloy) {\n    // eslint-disable-next-line no-underscore-dangle\n    (window.__alloyNS ||= []).push('alloy');\n    window.alloy = (...args) => new Promise((resolve, reject) => {\n      window.setTimeout(() => {\n        window.alloy.q.push([resolve, reject, args]);\n      });\n    });\n    window.alloy.q = [];\n  }\n  // Loading and configuring the websdk\n  return new Promise((resolve) => {\n    import(path)\n      .then(() => window.alloy('configure', config))\n      .then(resolve);\n  });\n}\n\nfunction onDecoratedElement(fn) {\n  // Apply propositions to all already decorated blocks/sections\n  if (document.querySelector('[data-block-status=\"loaded\"],[data-section-status=\"loaded\"]')) {\n    fn();\n  }\n\n  const observer = new MutationObserver((mutations) => {\n    if (mutations.some((m) => m.target.tagName === 'BODY'\n      || m.target.dataset.sectionStatus === 'loaded'\n      || m.target.dataset.blockStatus === 'loaded')) {\n      fn();\n    }\n  });\n  // Watch sections and blocks being decorated async\n  observer.observe(document.querySelector('main'), {\n    subtree: true,\n    attributes: true,\n    attributeFilter: ['data-block-status', 'data-section-status'],\n  });\n  // Watch anything else added to the body\n  observer.observe(document.querySelector('body'), { childList: true });\n}\n\nfunction toCssSelector(selector) {\n  return selector.replace(/(\\.\\S+)?:eq\\((\\d+)\\)/g, (_, clss, i) => `:nth-child(${Number(i) + 1}${clss ? 
` of ${clss}` : ''})`);\n}\n\nfunction getElementForProposition(proposition) {\n  const selector = proposition.data.prehidingSelector\n    || toCssSelector(proposition.data.selector);\n  return document.querySelector(selector);\n}\n\nasync function getAndApplyRenderDecisions() {\n  // Get the decisions, but don't render them automatically\n  // so we can hook up into the AEM EDS page load sequence\n  const response = await window.alloy('sendEvent', { renderDecisions: false });\n  const { propositions } = response;\n  onDecoratedElement(async () => {\n    await window.alloy('applyPropositions', { propositions });\n    // keep track of propositions that were applied\n    propositions.forEach((p) => {\n      p.items = p.items.filter((i) => i.schema !== 'https://ns.adobe.com/personalization/dom-action' || !getElementForProposition(i));\n    });\n  });\n\n  // Reporting is deferred to avoid long tasks\n  window.setTimeout(() => {\n    // Report shown decisions\n    window.alloy('sendEvent', {\n      xdm: {\n        eventType: 'decisioning.propositionDisplay',\n        _experience: {\n          decisioning: { propositions },\n        },\n      },\n    });\n  });\n}\n\nlet alloyLoadedPromise = initWebSDK('./alloy.js', {\n    datastreamId: '/* your datastream id here, formerly edgeConfigId */',\n    orgId: '/* your ims org id here */',\n  });\nif (getMetadata('target')) {\n  alloyLoadedPromise.then(() => getAndApplyRenderDecisions());\n}\n\nAdjust the path to the library and set the correct values for your datastreamId, formerly edgeConfigId, and orgId as per step 2.\nThen edit the loadEager method to:\nif (main) {\n    decorateMain(main);\n    document.body.classList.add('appear');\n    // wait for alloy to finish loading\n    await alloyLoadedPromise;\n    // break up possible long tasks before showing the LCP block to reduce TBT\n    await new Promise((res) => {\n      window.setTimeout(async () => {\n        // For newer AEM boilerplate, use this\n        await 
loadSection(main.querySelector('.section'), waitForFirstImage);\n        // For older AEM boilerplate versions, use this instead\n        // await waitForLCP(LCP_BLOCKS);\n        res();\n      }, 0);\n    });\n  }\n\nCommit and push your code\nSet up an experiment in Adobe Target and preview the page\nAdd the Target metadata property to your page to trigger the instrumentation, or adjust the getMetadata condition in the code above to your needs. You can typically import getMetadata from aem.js or equivalent in your project if it isn’t yet available in your scripts.js\nIf the instrumentation is properly done, you should see a call to https://edge.adobedc.net/ee/v1/interact in your browser’s Network tab when you load the page. Whether the page is actually modified or not will depend on the configuration you set in Adobe Target\nYou are all done!\nAdobe Target at.js (legacy)\n\nTo enable Adobe Target integration in your website using the legacy at.js approach, please follow these steps in your project:\n\nStart by reading the Adobe Target at.js implementation documentation, and skip all steps related to actual instrumentation at the code level\nGo to https://experience.adobe.com/#/target/setup/implementation and note down your Client Code and IMS Organization Id\nIn your GitHub repository for the website, add the at.js file. 
We have an optimized version for AEM Edge Delivery Services that you can fetch at https://atjs--wknd--hlxsites.hlx.live/scripts/at.fix.min.js until it is made publicly available\nThen edit your scripts.js file and add the following code somewhere above the loadEager method definition:\nfunction initATJS(path, config) {\n  window.targetGlobalSettings = config;\n  return new Promise((resolve) => {\n    import(path).then(resolve);\n  });\n}\n\nfunction onDecoratedElement(fn) {\n  // Apply propositions to all already decorated blocks/sections\n  if (document.querySelector('[data-block-status=\"loaded\"],[data-section-status=\"loaded\"]')) {\n    fn();\n  }\n\n  const observer = new MutationObserver((mutations) => {\n    if (mutations.some((m) => m.target.tagName === 'BODY'\n      || m.target.dataset.sectionStatus === 'loaded'\n      || m.target.dataset.blockStatus === 'loaded')) {\n      fn();\n    }\n  });\n  // Watch sections and blocks being decorated async\n  observer.observe(document.querySelector('main'), {\n    subtree: true,\n    attributes: true,\n    attributeFilter: ['data-block-status', 'data-section-status'],\n  });\n  // Watch anything else added to the body\n  observer.observe(document.querySelector('body'), { childList: true });\n}\n\nfunction toCssSelector(selector) {\n  return selector.replace(/(\\.\\S+)?:eq\\((\\d+)\\)/g, (_, clss, i) => `:nth-child(${Number(i) + 1}${clss ? 
` of ${clss}` : ''})`);\n}\n\nfunction getElementForOffer(offer) {\n  const selector = offer.cssSelector || toCssSelector(offer.selector);\n  return document.querySelector(selector);\n}\n\nfunction getElementForMetric(metric) {\n  const selector = toCssSelector(metric.selector);\n  return document.querySelector(selector);\n}\n\nasync function getAndApplyOffers() {\n  const response = await window.adobe.target.getOffers({ request: { execute: { pageLoad: {} } } });\n  const { options = [], metrics = [] } = response.execute.pageLoad;\n  onDecoratedElement(() => {\n    window.adobe.target.applyOffers({ response });\n    // keeping track of offers that were already applied\n    options.forEach((o) => o.content = o.content.filter((c) => !getElementForOffer(c)));\n    // keeping track of metrics that were already applied\n    metrics.map((m, i) => getElementForMetric(m) ? i : -1)\n        .filter((i) => i >= 0)\n        .reverse()\n        .map((i) => metrics.splice(i, 1));\n  });\n}\n\nlet atjsPromise = Promise.resolve();\nif (getMetadata('target')) {\n  atjsPromise = initATJS('./at.js', {\n    clientCode: '/* your client code here */',\n    serverDomain: '/* your client code here */.tt.omtrdc.net',\n    imsOrgId: '/* your ims org id here */',\n    bodyHidingEnabled: false,\n    cookieDomain: window.location.hostname,\n    pageLoadEnabled: false,\n    secureOnly: true,\n    viewsEnabled: false,\n    withWebGLRenderer: false,\n  });\n  document.addEventListener('at-library-loaded', () => getAndApplyOffers());\n}\n\nAdjust the path to the library and set the correct values for the clientCode and imsOrgId as per step 2, and edit the serverDomain so the first part matches your client code.\nThen edit the loadEager method to:\nif (main) {\n    decorateMain(main);\n    document.body.classList.add('appear');\n    // wait for atjs to finish loading\n    await atjsPromise;\n    // break up possible long tasks before showing the LCP block to reduce TBT\n    await new 
Promise((resolve) => {\n      window.setTimeout(async () => {\n        // For newer AEM boilerplate, use this\n        await loadSection(main.querySelector('.section'), waitForFirstImage);\n        // For older AEM boilerplate versions, use this instead\n        // await waitForLCP(LCP_BLOCKS);\n        resolve();\n      }, 0);\n    });\n  }\n\nCommit your code\nSet up an experiment in Adobe Target and preview the page\nAdd the Target metadata property to your page to trigger the instrumentation, or adjust the getMetadata condition in the code above to your needs. You can typically import getMetadata from aem.js or equivalent in your project if it isn’t yet available in your scripts.js\nIf the instrumentation is properly done, you should see a call to https://<client-code>.tt.omtrdc.net/rest/v1/delivery in your browser’s Network tab when you load the page. Whether the page is actually modified or not will depend on the configuration you set in Adobe Target\nYou are all done!","lastModified":"1760996869","labs":"AEM Sites"},{"path":"/docs/setup-customer-sharepoint-user","title":"How to use Sharepoint (delegated)","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"NOTE: for projects using Adobe’s Sharepoint (https://adobe.sharepoint.com) please continue here.","content":"style\ncontent\n\nHow to use Sharepoint (delegated)\n\nNOTE: for projects using Adobe’s Sharepoint (https://adobe.sharepoint.com) please continue here.\n\nIf you use SharePoint as your content source, AEM uses a registered Microsoft Azure application to do so. This application has delegated permissions defined that allow the service to access SharePoint on behalf of a user. This user needs to be registered to the project that is using SharePoint.\n\nAlternatively, the services can also authenticate as an application and use application permissions to access the sites. 
This needs additional setup by a SharePoint site administrator who can grant the permissions for the application.\n\nThe preferred setup is to use application permissions, as this narrows down the access the service has to a specific SharePoint site and does not require sharing any secrets of a technical user. Also, it reduces the problems around password rotation. Please continue here for instructions on how to do so.\n\nThe following describes how to set up delegated permissions for your project.\n\nSetting up SharePoint involves the following steps:\n\nCreate a folder within SharePoint that will be the website root.\nCreate or define the (technical) user that will access the SharePoint content.\nShare the website root folder with that user.\nConfigure the fstab.yaml with the respective folder.\nRegister the user with the service.\n1. Create the website root folder\n\nNavigate to your desired location in SharePoint and create a root folder that will be your website root. It is best to not use a SharePoint list root directly, so that you have a shared space for your authors to put collateral documents, for example a drafts folder or how-to-author documentation.\n\nAn example file structure might look like this, using the website folder as the root:\n\n2. Create or define the user\n\nIt is best practice to use a generic (or technical) user to access the content on behalf of the service. This is better than using an employee user account because the exact scope of the files the user can access can be defined. Furthermore, there is no risk of losing access to the files, should that employee leave the company.\n\nEvery company has different procedures to create technical users, so ask your IT department how to do this.\n\n3. Share the website root folder\n\nNOTE: for projects using Adobe’s Sharepoint (https://adobe.sharepoint.com) please see here.\n\nEnsure that the service user has edit rights on the website root folder. 
This can be achieved easily by clicking on the … ellipsis menu and selecting “Manage Access”.\n\nAnd then add the generic (or technical) user via the “Direct access” option.\n\n4. Configure the fstab.yaml\n\nThe next step is to configure the mountpoint in the fstab.yaml to point to the website root. It usually has the form of\n\nhttps://<tenant>.sharepoint.com/sites/<sp-site>/Shared%20Documents/website\n\n\nBut this might vary depending on how you create the SharePoint site and lists. The simplest way to obtain the URL is to copy-paste the first part from the browser address bar, e.g.:\n\n\n\nAnd then add the rest manually (note that copying the share link via the UI adds unnecessary information and it is better to use a canonical representation of the URL). Once you have composed the URL, you can test it by entering it again in the browser. You should end up in the folder view of your website root.\n\nAfter that, update the fstab.yaml accordingly.\n\nFor example:\n\nmountpoints:\n  /: https://adobeenterprisesupportaem.sharepoint.com/sites/hlx-test-project/Shared%20Documents/website\n\n\nTo finalize the configuration, commit the fstab.yaml back to the main branch.\n\n5. Register the user\nOverview\n\nIn order for the AEM service to access the authored content, it needs some information and setup. The AEM service (a cloud function) accesses the MS Graph API on behalf of a configured user. In order to do so, it needs to authenticate first in the context of an Application. This is important because the scopes given to the application define what permissions the service has on the MS Graph API. For example, it should be allowed to read and write documents, but not to alter access control.\n\nAn application is represented as an “Enterprise Application” in the respective Active Directory of a tenant. The permissions given to that enterprise application ultimately define what the service can access in that tenant’s resources. 
Certain permissions need to be approved by an Active Directory administrator before a user can use the application. This so-called “admin consent” is a mechanism to verify and control which permissions apps can have. This is to prevent dubious apps from tricking users into trusting an app that is not official. Having the extra admin consent step allows IT security to control which apps the employees can use.\n\n1. Sign in to the Registration Portal\nView Enterprise Applications in Azure Portal\n\nAssuming that so far no AEM Enterprise Applications are present in Azure (Microsoft Entra Id)\n\nAccess the Registration Portal\n\nGo to https://admin.hlx.page/register, enter the GitHub URL of the project or the org/site values of your site config.\n\nSign in as a non-admin user\n\nSigning in as a user that does not have admin permissions will show an error that it needs approval, i.e. the application needs admin consent.\n\nProblem: the Enterprise Application is not registered if a user never logs in.\n\nSign in as an admin user\n\nOne solution is to sign in as a user that does have admin permissions:\n\n(Note: at this point the Enterprise Application is still not registered in Azure.)\n\nAEM Content Integration Registration visible in UI\n\nIf the admin logs in (without checking the checkbox and granting consent for everyone), the Enterprise Application is present.\n\nCreate application using MS Graph or PowerShell\n\nAlternatively, you can create the Enterprise Application via MS Graph or PowerShell.\n\nIn order to make it visible in the Azure UI you also need to add the WindowsAzureActiveDirectoryIntegratedApp tag. 
This can be done directly when creating the application.\n\nUsing Graph Explorer:\n\nPOST https://graph.microsoft.com/v1.0/servicePrincipals\nContent-type: application/json\n{\n    \"appId\": \"e34c45c4-0919-43e1-9436-448ad8e81552\",\n    \"tags\": [\n        \"WindowsAzureActiveDirectoryIntegratedApp\"\n    ]\n}\n\n\nUsing PowerShell:\n\nPS> Connect-MgGraph -Scopes \"Application.ReadWrite.All\"\nPS> New-MgServicePrincipal -AppId e34c45c4-0919-43e1-9436-448ad8e81552 -Tags WindowsAzureActiveDirectoryIntegratedApp\n\n\nAfter that, you still need to give admin consent if you want a non-admin user to log in.\n\n\nAlso see:\n\nhttps://learn.microsoft.com/en-us/entra/identity/enterprise-apps/create-service-principal-cross-tenant\nhttps://learn.microsoft.com/en-us/entra/identity/enterprise-apps/add-application-portal-configure?pivots=ms-graph\n\nReview permissions\n\nNote that the AEM Content Integration Registration (e34c45c4-0919-43e1-9436-448ad8e81552) application is only needed during registration to verify that the user has read access to the SharePoint site. It has the following delegated permissions:\n\nopenid\nAllows users to sign in to the app with their work or school accounts and allows the app to see basic user profile information.\nprofile\nAllows the app to see your users' basic profile (e.g., name, picture, user name, email address).\nFiles.ReadWrite.All\nAllows the app to read, create, update and delete all files the signed-in user can access.\nUser logged in to the Registration Portal\n\nAfter completing this initial step, the user is logged in to the registration portal.\n\nVerify write access to the content source via challenge file\nDownload the challenge file\n\nBefore you can create (or change) the registration, you have to prove that you have write access to the respective SharePoint location. For that you need to upload a text file containing the mentioned content. 
This can easily be done by downloading the file and dropping it into the SharePoint folder.\n\nAfter that, click on Validate to continue the registration.\n\nConnect the technical user\nWith the permissions properly granted, you should be able to log in properly to the Registration Setup UI:\nClick on “Connect User” and you should see a new login window, where you want to log in with your technical user. This is the AEM Content Integration requesting more permissions to access SharePoint:\nSimilar to above, an administrator needs to consent to the permissions:\n\n\n\n\nAfter the login process, the UI should show the connected user information:\n\nOnce the user is registered, you should be able to preview a page.\n\nImportant\n\nChanging the user's password will invalidate the grant that is established when connecting the user. This will eventually cause an error in the sidekick. In order to prevent this, you need to reconnect the user by clicking the disconnect button and then connecting it again.","lastModified":"1753707194","labs":""},{"path":"/developer/web-components","title":"Web Components","image":"/developer/media_1f9f5f5fc53b56eac9e20247e5abdf61381f30a7f.png?width=1200&format=pjpg&optimize=medium","description":"Web Components are a collection of web standards that allow the creation and use of reusable, modular functionality in web sites and web apps. They ...","content":"style\ncontent\n\nWeb Components\n\nWeb Components are a collection of web standards that allow the creation and use of reusable, modular functionality in web sites and web apps. They can be used in Adobe Experience Manager projects.\n\nConcepts and usage\n\nThere are three distinct web standards that make up Web Components. 
In order of relevance for AEM projects, these are:\n\nCustom elements allow the use of custom behavior and functionality in your existing site\nHTML templates enable the creation of DOM elements that are not displayed in the page\nShadow DOM isolates a branch of the DOM tree from its parent, enabling re-use across a wide range of sites\n\nDepending on how Web Components are written, their code and styles can be largely isolated from the rest of the page, which allows custom elements written by different teams to coexist on the same page, as well as sharing components between independent projects or teams.\n\nThey are often used for design systems in large organizations, but can also provide useful functionality such as dynamic loading of content, data retrieval from third-party sources, client-side content processing, etc.\n\nHow to use Web Components in AEM projects?\n\nTo activate a Web Component, you need to load the JavaScript code that defines it and have your AEM blocks code generate the corresponding custom HTML elements.\n\nThe code loading is similar to adding any existing JavaScript/CSS code to your project, but please read the “performance concerns” section below to avoid surprises.\n\nExample: page publication time\n\nThe following paragraph uses a publication-time AEM block that leverages GitHub’s relative-time Web Component to display the page publication time in a friendly format:\n\nThis page was published\n\nThis would typically be used in the website footer if you want to indicate when each page was last published.\n\nReusing existing code makes total sense to compute friendly relative time strings like “two days ago”, “five hours ago”, so a Web Component is helpful, as our publication-time AEM block only needs to output something like:\n\n<relative-time\n  datetime=\"2024-03-28T17:00:28.000Z\">\n  28.03.2024\n</relative-time>\n\n\nTo let the relative-time Web Component do all the (somewhat) hard work.\n\nFrom a coding standpoint, we only need to 
make the relative-time component code available and write the pretty simple glue code in the AEM publication-time block.\n\nPerformance concerns, loading and lifecycle\n\nTo keep your Web Performance at the required level, your pages need to be frugal about how much JavaScript and CSS code they load, and if needed control when that loading occurs.\n\nIf your Web Components are written with performance in mind, there’s no reason for them to be less performant than AEM blocks, which are also backed by client-side JavaScript. Unfortunately, this cannot be taken for granted and you should test the performance of any Web Component before using it.\n\nThe AEM blocks code needs to translate the AEM semantic HTML to generate the custom HTML elements, but that’s no different from generating other HTML elements, so shouldn’t significantly affect performance.\n\nIf you want to use components from an existing library, you should make sure to load only JavaScript and CSS code that the current page actually requires, to avoid any extra baggage. Some libraries do that naturally, and others will require jumping through hoops to repackage their components in an optimized way. Your mileage may vary, and as always you’ll need to measure results to verify that the components code does not hamper performance.\n\nYou also need to be aware of the AEM Three-Phase-Loading principle, and if needed explicitly define when your Web Components code and styles are loaded.\n\nOne nice feature of Web Components that helps with lazy loading is that they take care of the asynchronicity and lifecycle concerns. 
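As an illustration of those lifecycle callbacks, here is a minimal sketch of a custom element. The <friendly-time> element name and the friendlyLabel helper are hypothetical (this is not GitHub's relative-time implementation), and the base-class guard only exists so the formatting logic can also run outside a browser:

```javascript
// Minimal sketch of a custom element with a lifecycle callback.
// <friendly-time> and friendlyLabel are hypothetical names for illustration.
// The guard lets the pure formatting logic run outside a browser as well.
const Base = globalThis.HTMLElement ?? class {};

// Pure helper turning a past timestamp (ms) into a friendly label.
function friendlyLabel(thenMs, nowMs = Date.now()) {
  const seconds = Math.round((nowMs - thenMs) / 1000);
  const units = [['year', 31536000], ['month', 2592000], ['day', 86400], ['hour', 3600], ['minute', 60]];
  for (const [name, size] of units) {
    const n = Math.floor(seconds / size);
    if (n >= 1) return `${n} ${name}${n > 1 ? 's' : ''} ago`;
  }
  return 'just now';
}

class FriendlyTime extends Base {
  // Invoked by the browser when the element is attached to the DOM,
  // regardless of whether the definition loaded before or after parsing.
  connectedCallback() {
    this.textContent = friendlyLabel(Date.parse(this.getAttribute('datetime')));
  }
}

// Register the element only in a browser context.
if (globalThis.customElements) {
  customElements.define('friendly-time', FriendlyTime);
}
```

Because connectedCallback fires whenever an element is attached (or upgraded), the definition can be loaded lazily and any <friendly-time> elements already in the markup are rendered automatically.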
Provided your components use the lifecycle callbacks in the correct way, it does not matter whether their code becomes active before or after the corresponding elements are added to the DOM; the browser does the right thing.","lastModified":"1725864574","labs":"Non mainstream tech: Not many projects are using this yet, but it's perfectly fine if you have good reasons and are careful"},{"path":"/developer/block-party/","title":"Block Party","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"The Block Party is a place for the AEM developer community to showcase what they have built on AEM sites. It also allows others to ...","content":"style\ncontent\n\nBlock Party\n\nThe Block Party is a place for the AEM developer community to showcase what they have built on AEM sites. It also allows others to avoid reinventing the wheel and reuse these blocks / code snippets / integrations built by the community and tweak the code as necessary to fit their own projects.\n\nNote: While we love and support our AEM developer community, Adobe is not responsible for maintaining or updating the code that is showcased in Block Party. 
Please use the code at your own discretion.\n\nIf you are an AEM Developer and would like to submit your cool block / code snippet or integration, please enter your submission using this form.\n\nhttps://main--helix-website--adobe.hlx.page/developer/block-party/block-party.json?sheet=curated-list-new\n\nPrevious\n\nBlock Collection\n\nUp Next\n\nCustom Headers","lastModified":"1749350591","labs":""},{"path":"/docs/teams","title":"Teams","image":"/docs/media_10908deafae5749c1734ba59aae03be6116eadd42.png?width=1200&format=pjpg&optimize=medium","description":"We create dedicated teams in Microsoft Teams for each AEM customer and invite business users, developers, and authors to it to answer your questions about ...","content":"style\ncontent\n\nTeams\n\nWe create dedicated teams in Microsoft Teams for each AEM customer and invite business users, developers, and authors to it to answer your questions about authoring and development, coordinate your launch or migration, and help with best practices.\n\nThe Adobe team is globally distributed. During US and European business hours you can expect to receive an answer within a few hours. Outside those times, responses may take a bit longer.\n\nRequest to create or join a Microsoft Teams channel via your program in Cloud Manager, or reach out to your Adobe contact. 
For customers using Slack instead, we can also collaborate with you on Slack.\n\nPrevious\n\nSlack","lastModified":"1765528775","labs":""},{"path":"/docs/sidekick-security","title":"AEM Sidekick Security","image":"/docs/media_12e5a3b1f66b61a0a452f36e9f5101e309dec6a20.jpg?width=1200&format=pjpg&optimize=medium","description":"This page describes security aspects of the Sidekick such as required browser permissions, privacy and network requests being made during operation.","content":"style\ncontent\n\nAEM Sidekick Security\n\nThis page describes security aspects of the Sidekick such as required browser permissions, privacy and network requests being made during operation.\n\nYou can also refer to the following resources for additional information:\n\nThe listing page in Google Chrome Web Store\nThe manifest file on GitHub (open source)\nThe extension’s context menu\nBrowser Permissions\n\nThe Sidekick requires the following browser permissions as defined in its manifest file to function as expected:\n\nPermission\t Justification \n activeTab\t Required to determine whether to show or hide the Sidekick in the active tab \n contextMenus\t Required to simplify adding and removing projects \n declarativeNetRequest\t Required to append a previously stored access token to requests made to the Admin API \n scripting\t Required to load the Sidekick in a relevant browser tab \n storage\t\n\nRequired to persist the following:\n\nstate settings (local storage)\nproject configurations (synchronized across devices)\naccess tokens (session storage)\n\n host_permissions\t\n\nRequired hosts:\n\nhttp://localhost:3000/* – Used by developers during local development. 
See aem CLI for more information.\nhttps://*/* – Used to determine whether to show or hide the Sidekick based on a tab’s URL.\n\n externally_connectable\t\n\nids\n\nUsed for communication with, and importing projects from, the legacy Sidekick:\n\nccfggkjabjahcjoljmgmklhpaccedipo – The ID of the legacy Sidekick.\nolciodlegapfmemadcjicljalmfmlehb – Used by Adobe during local development.\nahahnfffoakmmahloojpkmlkjjffnial – Used by Adobe during local development.\n\nmatches\n\nUsed to allow customer sites to interact with the Sidekick UI, e.g. for resizing or closing custom popovers. See Sidekick Development for more information.\n\nhttp://localhost:3000/* – Used by Adobe during local development.\nhttps://*/* – Allows receiving messages from any site using HTTPS. Write access is restricted to trusted origins via code.\nTrusted Origins\n\nThe following Adobe-owned origins are allowed to communicate with the Sidekick extension on behalf of the user to add, remove, and sign in to sites:\n\nhttps://admin.hlx.page – The current endpoint of the AEM Admin API\nhttps://api.aem.live – The new endpoint of the AEM Admin API\nhttps://tools.aem.live – Tools to help administrators manage AEM sites\nPrivacy\n\nThe Sidekick collects user activity allowing Adobe to:\n\nLearn how users interact with the UI\nEnhance the user experience in future releases\n\nAll data collected is:\n\nMinimal: names of actions users click in the user interface and target URLs.\nSampled: only every 10th interaction triggers data collection.\nAnonymous: no PII is being transmitted or stored.\nSecure: Data is transmitted using HTTPS and only authorized Adobe personnel have access to stored data.\n\nAdobe further declares that user data is:\n\nNot being sold to third parties\nNot being used or transferred for purposes that are unrelated to the item's core functionality\nNot being used or transferred to determine creditworthiness or for lending purposes\nNetwork Requests\n\nThe Sidekick performs HTTPS requests to 
the following hosts:\n\nNetwork Request\t Justification \n https://admin.hlx.page/*\t The current endpoint of the AEM Admin API. Used to perform actions like previewing, publishing and signing in. Requests can originate from the service worker as well as the active tab and can include the user’s access token. Methods: GET, POST and DELETE. \n https://api.aem.live/*\t The new endpoint of the AEM Admin API. Used to perform actions like previewing, publishing and signing in. Requests can originate from the service worker as well as the active tab and can include the user’s access token. Methods: GET, POST and DELETE. \n https://rum.hlx.page/*\t The endpoint of Adobe’s RUM (Real Use Monitoring) service. Used to collect anonymous usage data. Requests can originate from the service worker as well as the active tab. Method: POST \n https://*.sharepoint.com/*\t The endpoint of the configured SharePoint instance. Used to retrieve the driveItem if the URL in the active tab matches the configured SharePoint host. Requests originate from the active tab and can include the user’s SharePoint credentials. Method: GET \n https://*--project--example.aem.*/*\t The URLs of your preview and live environments. Used to refresh the browser cache after preview and publish operations. Requests can originate from the service worker as well as the current tab and can include the user’s credentials. Method: GET\nRestricting Access\n\nYou can restrict the Sidekick’s access to certain hosts for all users in your enterprise by defining the runtime_blocked_hosts and runtime_allowed_hosts settings in your enterprise’s Chrome profile. 
See Google’s documentation on Managing Extensions in Your Enterprise for more information.\n\nExample 1: Allow everything, deny few\n\n{\n  \"igkmdomcgoebiipaifhmpfjhbjccggml\": {\n    \"runtime_blocked_hosts\": [\n      \"https://intranet.example.com/*\",\n      \"https://extranet.example.com/*\"\n    ]\n  }\n}\n\n\nThis would prevent the Sidekick extension from interacting with any URL matching https://intranet.example.com/* or https://extranet.example.com/*.\n\nExample 2: Deny everything, allow few\n\n{\n  \"igkmdomcgoebiipaifhmpfjhbjccggml\": {\n    \"runtime_blocked_hosts\": [\"http*://*/*\"],\n    \"runtime_allowed_hosts\": [\n      \"https://admin.hlx.page/*\",\n      \"https://api.aem.live/*\",\n      \"https://rum.hlx.page/*\",\n      \"http://localhost:3000/*\",\n      \"https://*.sharepoint.com/*\",\n      \"https://*--project--example.aem.*/*\"\n    ]\n  }\n}\n\n\nThis would prevent the Sidekick extension from interacting with any URL, except the ones matching a pattern defined in runtime_allowed_hosts. This example uses a combination of the host_permissions in the manifest file and the list of URLs from the Network Requests chapter above to ensure maximum functionality and an optimal user experience.\n\nSecurity Audits\n\nThe Sidekick’s entire source code is publicly available and – like all of AEM – subject to regular audits performed by third-party security researchers. Reports can be shared with customers and prospects under NDA.\n\nPrevious\n\nUsing Sidekick\n\nUp Next\n\nCustomizing Sidekick","lastModified":"1770807281","labs":""},{"path":"/docs/config-service-setup","title":"Setting up the configuration service","image":"/docs/media_15797f15710852969aba8d27f25800586232b1e1d.png?width=1200&format=pjpg&optimize=medium","description":"The Configuration Service is used to aggregate and deliver configuration for various consumers in the AEM architecture including: Client, Delivery, HTML Pipeline, and Admin Service. 
...","content":"Setting up the configuration service\n\nThe Configuration Service is used to aggregate and deliver configuration for various consumers in the AEM architecture including: Client, Delivery, HTML Pipeline, and Admin Service. It features configuration inheritance from organizations and profiles into sites, which makes it easy to manage large collections of sites, and create new sites with minimal overhead.\n\nFor most of the common tasks there are simple user interfaces available on https://tools.aem.live, but since the configuration service is API-first, it is also ideal for automation. Most of the tutorial below focuses on the details of the service itself, and how to write automation scripts. If you just need to update (or create) configuration for a site, it is likely that https://tools.aem.live is the best option.\n\nConfigurations stored in the client scope can be consumed from your site or the AEM Sidekick and control functionality and behavior for visitors and authors. All other configurations are internal to the AEM infrastructure and cannot be accessed directly.\n\nThe configuration is managed using REST calls to the Admin service (admin.hlx.page). See AEM Admin API for the full API reference.\n\nFor existing AEM projects (coming from hlx.live), the configuration service aggregates a valid configuration based on the various sources (fstab.yaml, .helix/config.xlsx, etc). This ensures backward compatibility, but means that some features are not available in this mode (the traditional, distributed configuration).\n\nOne of the new features that the configuration service can support is independent code and content definitions per site. A site is a combination of an org name and a site name. This is similar to the GitHub owner and repo tuple, but no longer requires a direct relation to the GitHub repository.\n\nFor example, a site named website of the org acme could use a code repository acme/boilerplate. 
The preview URL follows the same scheme, but using site and org instead. In this example: https://main--website--acme.aem.page.\n\nWith this new mechanism, it is now possible to have multiple sites that use different content, but use the same code repository. This feature is also known as “repoless”.\n\nPrerequisites\n\nThere are some rules and constraints when creating new setups that use the configuration service:\n\nAny aem.live organization also needs to exist as a github.com org, and at least one repository needs to be synced using AEM Code Sync. This ensures that the organization namespace is properly claimed by an entity that can also claim an org on github.com. The github.com org can exceptionally be created by Adobe or a trusted implementation partner on the customer’s behalf, but it must have at least one owner from the customer's organization.\nFor projects that want to use multiple sites with the same code repository (repoless), there must be one canonical site for which the org/site matches the GitHub owner/repo. This is required for proper code-config association and CDN push invalidation.\nCreate your Organization\n\nFollow the developer tutorial to create your very first site. As part of this process, an aem.live org with the same name as your github.com org will be created, and the github.com user who added the AEM Code Sync App will be added as admin. 
Contact an Adobe representative if you need a different admin user.\n\nAdd your Site\n\nIn this example, we create a new site named website under a fictitious acme org.\n\nNote: Before running the example command below:\n\nFamiliarize yourself with the Admin API\nMake sure you are properly authenticated\nReplace the fictitious acme and website in the URL with the names of your own org and site\nReplace code.owner and code.repo with your own GitHub repository\nReplace content.source.url with your own content source URL (adjust content.source type as needed)\ncurl -X PUT https://admin.hlx.page/config/acme/sites/website.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n  \"version\": 1,\n  \"code\": {\n    \"owner\": \"acme\",\n    \"repo\": \"boilerplate\"\n  },\n  \"content\": {\n    \"source\": {\n      \"url\": \"https://content.da.live/acme/website/\",\n      \"type\": \"markup\"\n    }\n  }\n}'\n\n\n\nThe resulting site would immediately be available at https://main--website--acme.aem.page. The 404.html would be displayed until content has been previewed.\n\nUpdating your Site\n\nYou can update the site configuration either as a whole document, or only certain sub-structures. Here are some examples:\n\nUpdate Access Control\n\nUse this to assign roles to your users. 
For example, to add user bob to the configuration admin role:\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/access/admin.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n  \"role\": {\n    \"config\": [\n      \"bob@acme.org\"\n    ]\n  }\n}'\n\n\nUpdate Code Location\n\nUse this to switch your site to a different codebase (see also repoless):\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/code.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n\t\"owner\": \"acme\",\n\t\"repo\": \"boilerplate\"\n}'\n\nUpdate Content Location\n\nUse this to switch your site to a different content source (see also repoless):\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/content.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n    \"source\": {\n      \"url\": \"https://content.da.live/acme/website/\",\n      \"type\": \"markup\"\n    }\n  }'\n\nUpdate Production CDN\n\nUse this to configure the CDN settings:\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/cdn/prod.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n\t\t\"host\": \"{host}\",\n\t\t\"type\": \"fastly\",\n\t\t\"serviceId\": \"{serviceId}\",\n\t\t\"authToken\": \"{authToken}\"\n}'\n\nUpdate Custom Headers\n\nUse this to set custom HTTP headers:\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/headers.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n\t\"/**\": [\n      {\n        \"key\": \"access-control-allow-origin\",\n        \"value\": \"*\"\n      }\n    ]\n}'\n\nUpdate Public Custom Configuration\n\nUse this to define public project configuration, for example for easy consumption on the client side:\n\ncurl -X POST 
https://admin.hlx.page/config/{org}/sites/{site}/public.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n\t\"custom\": {\n\t\t\"attribute\": \"value\"\n\t}\n}'\n\n\nCaution: Do not store any secrets in public configuration! It will be accessible to everyone at https://{your-host}/config.json.\n\nUpdate Sidekick Configuration\n\nUse this to customize the Sidekick:\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/sidekick.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data '{\n    \"plugins\": [{\n      \"id\": \"foo\",\n      \"title\": \"Foo\",\n      \"url\": \"https://www.aem.live/\"\n    }]\n  }'\n\nUpdate Indexing Configuration\n\nUse this to add your indexing configuration:\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/content/query.yaml \\\n  -H 'content-type: text/yaml' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data @your-query.yaml\n\nUpdate Sitemap Configuration\n\nUse this to add your sitemap configuration:\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/content/sitemap.yaml \\\n  -H 'content-type: text/yaml' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data @your-sitemap.yaml\n\nUpdate robots.txt\n\nUse this to customize instructions for robots and crawlers:\n\ncurl -X POST https://admin.hlx.page/config/{org}/sites/{site}/robots.txt \\\n  -H 'content-type: text/plain' \\\n  -H 'x-auth-token: {your-auth-token}' \\\n  --data 'User-agent: *\nAllow: /\nSitemap: https://{your-host}/sitemap.xml'\n\nstyle\ncontent\nRemove unused configuration files\n\nOnce you have enabled the configuration service, the configuration settings there override the settings you have in configuration files in your GitHub repository and your content source, so it is best to remove them:\n\nFirst remove the files from 
GitHub:\n\nfstab.yaml\nrobots.txt\ntools/sidekick/config.json\nhelix-query.yaml\nhelix-sitemap.yaml\n\nAnd after those are removed, check if the hlx.page preview page returns a 404. This means that the internal, repository-based configuration is cleaned up. Then you can also unpreview and delete the configuration in the /.helix folder in your content:\n\n.helix/config.xlsx\n.helix/headers.xlsx\n\nPrevious\n\nRepoless Sites\n\nNext\n\nConfiguration Blueprints","lastModified":"1771013170","labs":""},{"path":"/developer/upgrade","title":"Upgrading to aem.live from hlx.live","image":"/developer/media_1c4c7dc09ea738d661e244ec26968945fd8473d95.png?width=1200&format=pjpg&optimize=medium","description":"As outlined on our deprecations and end-of-service page, the hlx.live domain has been blocked on December 18, 2025.","content":"style\ncontent\n\nUpgrading to aem.live from hlx.live\nEnd-of-service warning\n\nAs outlined on our deprecations and end-of-service page, the hlx.live domain has been blocked on December 18, 2025.\n\nIf you are familiar with *.hlx.live for published content and *.hlx.page for previews, please note that we are introducing *.aem.live and *.aem.page as new domains for your sites.\n\nWith this change, we are also introducing several improvements while making sure everything works just as before. The most important benefit is that preview URLs will load faster on aem.page thanks to caching.\n\nIn this guide, we will walk you through the changes you'll need to make to upgrade to aem.live. Many of these steps are only required if you use specific features, so you may not need to complete all steps. Rest assured, the process is straightforward, and we are here to help you every step of the way.\n\nTry your site on aem.live\n\nGo to your site on hlx.live and replace hlx.live in the URL with aem.live. 
For instance, if you used to see your site on https://main--helix-website--adobe.hlx.live/ then your new URL would be https://main--helix-website--adobe.aem.live/. No more hlx.live; it's aem.live now.\n\nTry previews on aem.page\n\nIf your site works on aem.live, then previews will also work on aem.page. For instance, if you used to see your site on https://main--helix-website--adobe.hlx.page/ then your new URL would be https://main--helix-website--adobe.aem.page/.\n\nTell Sidekick to use aem.live and aem.page\n\nEven though both hlx.page and aem.page work for previews now, on older projects Sidekick defaults to hlx.page.\n\nIn the configuration for your site https://www.aem.live/docs/admin.html#schema/SiteConfig you can find the cdn.live.host and cdn.preview.host and set them to the following values, replacing <owner> with your GitHub organization and <repo> with your repository name:\n\ncdn.live.host: main--<repo>--<owner>.aem.live\ncdn.preview.host: main--<repo>--<owner>.aem.page\n\n\n\nIf you are in a legacy setup, you can find the .helix/config spreadsheet (or equivalent configuration file in other authoring environments) and set the values above. Preview the spreadsheet using Sidekick to activate the new configuration.\n\nAfter activating the configuration, please make sure the Sidekick preview action is opening URLs with .aem.page.\n\nPoint your CDN to aem.live\n\nIn your CDN setup, change the origin URL or host name from hlx.live to aem.live and activate the change. If you experience any issues, then revert the origin URL or host name back to hlx.live and contact Adobe for assistance.\n\nWe recommend making the change in a lower environment such as stage before performing it in the production environment.\n\nThat's it, you made it! Your site is now on aem.live. For 95% of all sites, this is all you have to do. 
Keep reading for the remaining 5% of special cases.\n\nIf you need help during the upgrade or run into issues, please join our Discord channel or get in touch with us to discuss further.\n\nAdvanced Upgrade Scenarios\nIf you are using BYO DNS without BYO CDN\n\nIf you use custom domains without a custom CDN, please contact Adobe to perform the origin change to aem.live.\n\nIf you are using hlx.page or hlx.live URLs in your code\n\nCheck your GitHub repository for any references to hlx.page or hlx.live. If you find them, double-check the functionality, as the changed hostname may introduce CORS or other cross-origin issues. Update the references to aem.page or aem.live.\n\nCode references to hlx.page and hlx.live in the sampleRUM() function, lib-franklin.js or aem.js do not need to be changed.\n\nIf you are using hlx.page or hlx.live in your content\n\nCheck your content repository for any instances where you might have references to hlx.page or hlx.live in any content areas, including spreadsheets, for project-specific functionality.\n\nUpdate link references around project documentation areas that may include, but are not limited to, Block Library, Runbooks, Authoring Guides, and Pull Request Templates.\n\nYou do not need to update in-content or absolute links/hrefs pointing to hlx.page or hlx.live in Word documents, as they will be auto-transformed to aem.page and aem.live.\n\nIf you use the deprecated forms endpoint to receive form posts in Excel or Google Sheets\n\nThe forms service on hlx.live has been deprecated and will not be supported on aem.live. Please contact Adobe to discuss possible alternatives for this service.\n\nIf you have configured external services to use hlx.page or hlx.live\n\nMany services like Google reCAPTCHA, Google Maps, Cloudflare Turnstile, etc. use the site domain or referrer domain to validate API tokens. 
If you have configured these services to allow your site on hlx.page or hlx.live, make sure to also allow aem.page and aem.live.\n\nIf you want to update aem.js, too\n\nUpgrading to aem.live does not require code changes. Some teams use the opportunity to also update to the latest version of aem.js. It's best to treat this as a separate upgrade and (like all code changes) test it on a separate branch.\n\nPrevious\n\nBuild\n\nUp Next\n\nAnatomy of an AEM Project","lastModified":"1768206415","labs":""},{"path":"/docs/deprecation","title":"Deprecations and Removals","image":"/docs/media_170b9656d143174267db14f90acbc6a6e9f5c6484.png?width=1200&format=pjpg&optimize=medium","description":"Features in AEM that no longer see enough use, or have been replaced with better, more reliable solutions, will be removed. Before these features ...","content":"Deprecations and Removals\n\nFeatures in AEM that no longer see enough use, or have been replaced with better, more reliable solutions, will be removed. Before these features are removed, they are declared deprecated and we provide suggestions for possible replacements.\n\nIn the next step, Adobe posts an end-of-service date on this page. After the end-of-service date, the respective service or feature will no longer be available.\n\nstyle\ncontent\nDeprecated\n\nCustom domains without a custom CDN\n\nDeprecation notice\n\nWe no longer accept new domains for the Bring-your-own-DNS service. Please use Adobe Managed CDN.\n\nFolder Mapping\n\nDeprecation notice\n\nPlease contact us if you have a use case for folder mapping. We will help to find the best solution. Existing projects using folder mapping may need to migrate to a different solution in the future.\n\nScheduling\n\nDeprecation notice\n\nReach out to us if you have a use case for scheduling; we will help to find the best solution. 
Existing projects using scheduling may need to migrate to a different solution in the future.\n\nRemoved\n\n*.hlx.live and *.hlx.page\n\nEnd-of-service\n\nThe *.hlx.* domains have been retired in favor of *.aem.* on December 18, 2025. Click above for upgrade instructions.\n\nNote: admin.hlx.page is excluded from this retirement and will remain operational.\n\nCloudflare Setup (config only, no worker) for hlx.live\n\nNo longer supported\n\nThis setup is no longer supported. Please use the Cloudflare worker setup instead.\n\nManual Forms Sheet Setup\n\nNo longer supported\n\nThis feature is retired and has been replaced by Edge Delivery Services for AEM Forms.\n\nIDP-Based Site Authentication\n\nNo longer supported\n\nIDP-based site authentication has been retired and replaced with token-based authentication.\n\nAEM Sidekick v6\n\nEnd-of-service\n\nThis version of the sidekick was retired on February 2, 2026. Please use the latest sidekick version instead.\n\nFile-Based Configuration\n\nNo longer supported\n\nTransition your configuration to the configuration service, which offers advanced features and improved inheritance management.","lastModified":"1770360868","labs":""},{"path":"/developer/da-tutorial","title":"Getting Started – Document Authoring (DA) Developer Tutorial","image":"/developer/media_1d00989ba18e942fbddc9bb108add01e153029f22.png?width=1200&format=pjpg&optimize=medium","description":"This tutorial will get you up-and-running with a new Adobe Experience Manager (AEM) project. In ten to twenty minutes, you will have created your own ...","content":"style\ncontent\n\nGetting Started – Document Authoring (DA) Developer Tutorial\n\nThis tutorial will get you up-and-running with a new Adobe Experience Manager (AEM) project. 
In ten to twenty minutes, you will have created your own site and be able to create, preview, and publish your own content and styling, and add new blocks.\n\nPrerequisites:\n\nYou have a GitHub account, and understand Git basics.\nYou understand the basics of HTML, CSS, and JavaScript.\nYou have Node/npm installed for local development.\n\nThis tutorial uses macOS, Chrome, and Visual Studio Code as the development environment and the screenshots and instructions reflect that setup. You can use a different operating system, browser, and code editor, but the UI you see and steps you must take may vary accordingly.\n\nWhat makes Document Authoring different\n\nDocument Authoring (DA) is an alternative to SharePoint or Google Drive that provides a document-based authoring interface focused on the AEM Document model (Blocks, Sections, etc.). It provides an SDK, APIs, and built-in Adobe technologies.\n\nGetting started\n\nThe fastest and easiest way to get started following AEM best practices is to create your repository using the AEM Block Collection GitHub repository as a template. You can find it here: https://github.com/aemsites/da-block-collection\n\n\nClick the Use this template button and select Create a new repository, and select the user or org you would like to own this repository.\n\nWe recommend that the repository is set to public.\n\nThe only remaining step in GitHub is to install the AEM Code Sync GitHub App on your repository by visiting this link: https://github.com/apps/aem-code-sync/installations/new. This app will sync code changes to AEM.\n\n\nIn the Repository access settings of the AEM Code Sync App, make sure you select Only select Repositories (not All Repositories). Then select your newly created repository, and click Save.\n\nNote: If you are using GitHub Enterprise with IP filtering, you can add the following IP to the allow list: 3.227.118.73\n\nCongratulations! 
You have a new website running on https://<branch>--<repo>--<owner>.aem.page/ In the example above that’s https://main--da-tutorial--da-sites.aem.page/\n\nLink your code and content\n\nCopy your GitHub repo URL. In our example, this would be: https://github.com/da-sites/da-tutorial\n\nPaste this URL into the https://da.live/start page and click Go.\n\nYou will be presented with a code snippet to place in your project. Click Copy to add the code to your clipboard. You will then be walked through the process of adding the snippet to your fstab.yaml.\n\nNote: Firefox users may have to select the snippet and copy manually.\n\nClicking Copy will present an Open button. This button will take you directly to your fstab.yaml to paste your snippet.\n\nUpon landing back in GitHub, paste the code snippet and commit the changes. You can close the GitHub tab when you are finished committing your changes.\n\nAfter you have committed your changes, switch back to your https://da.live/start tab and click Done.\n\nYou will be presented with the option to create demo content. This is recommended for this tutorial.\n\nBrowse your content\n\nAfter your demo content has been copied, you will be taken to DA’s browse view. Here you can:\n\nBrowse your content\nSearch & replace content\nCopy, rename, move, and delete content\nCreate new docs, sheets, and media\nAccess configurations for both DA and AEM.\nDrag and drop supported files from your computer or mobile device.\n\nCheck out the demo page by clicking on it.\n\nEdit and preview your content\n\nOne of the unique features of DA is the ability to get a live preview of your document. Click the preview tab to expand the live preview pane.\n\nOnce the live preview pane is open, change the document. Below, we have changed the congrats text. You will see your changes reflected in the preview pane on the right.\n\nPreview your content from DA\n\nIn addition to the AEM Sidekick, DA provides the ability to preview and publish your content. 
Select the paper airplane in the top right of the page and click preview.\n\nYour page will open in a new tab with your changes. You are now looking at a staged, or preview, version of your page.\n\nPublish your content using AEM Sidekick\n\nThe next step is to publish the page using AEM Sidekick. If you have not already done so, install the AEM Sidekick Chrome extension.\n\nAfter adding the extension to Chrome, don’t forget to pin it; this will make it easier to find.\n\nNavigate back to your previewed page and toggle the Sidekick extension to see Sidekick at the bottom of your page. Click the Publish button to push your page live.\n\nStart developing styling and functionality\nhttps://main--helix-website--adobe.aem.page/developer/videos/tutorial-step4.mp4\n\nTo get started with development, it is easiest to install the AEM Command Line Interface (CLI) and clone your repo locally using the following commands.\n\nnpm install -g @adobe/aem-cli\ngit clone https://github.com/<owner>/<repo>\n\n\nFrom there, change into your project folder and start your local development environment using the following.\n\ncd <repo>\naem up\n\n\n\n\nThis opens http://localhost:3000/ and you are ready to make changes.\nA good place to start is in the blocks folder, which is where most of the styling and code lives for a project. Simply make a change in a .css or .js file and you should see the changes in your browser immediately.\n\nOnce you are ready to push your changes, simply use Git to add, commit, and push your code to your preview (https://<branch>--<repo>--<owner>.aem.page/) and production (https://<branch>--<repo>--<owner>.aem.live/) sites.\n\nThat’s it, you made it! Congrats, your first site is up and running. 
If you need help in the tutorial, please join our Discord channel or get in touch with us.\n\nPrevious\n\nBuild\n\nUp Next\n\nAnatomy of an AEM Project","lastModified":"1740760211","labs":"Document Authoring"},{"path":"/docs/security","title":"Security Overview","image":"/docs/media_101ca3ee9a7071b2368532d2b6c3b76386e2b2480.png?width=1200&format=pjpg&optimize=medium","description":"Adobe Experience Manager Security overview for Software Architects","content":"style\ncontent\n\nSecurity Overview\n\nThis security guide covers Edge Delivery Services in Adobe Experience Manager Sites as a Cloud Service, the Admin API for Edge Delivery Services, the SharePoint integration, and the developer tooling for Edge Delivery Services. There is a separate security guide for the AEM Sidekick. Familiarity with the overall architecture is recommended and assumed for the rest of this guide.\n\nOverall Considerations\nTenant Isolation\n\nAll services that are a part of aem.live are multi-tenant. Tenant isolation is built into the publish and delivery services as well as the Content Hub to help ensure required content and data protection.\n\nWrite operations (preview or publish) require project details (GitHub owner, repository, branch and, if configured, an access token) and content paths to be included in requests. This is required to inform the Admin Service which source document to fetch and where to store the processed content in the Content Hub. Similarly, a content request coming from a customer’s CDN must include the project details and path in its URL structure so the delivery service knows which content to deliver.\n\nData Encryption\n\nAll data in transit is exchanged over secure, encrypted connections using Transport Layer Security (TLS) 1.2 or greater. 
All data at rest is encrypted using AES256, with keys managed by two (2) independent cloud service providers.\n\nVulnerability Management\n\nWe adhere to Adobe's Secure Product Lifecycle (SPLC) to ensure swift and accurate assessment and mitigation of security vulnerabilities according to their threat rating. Automated dependency management helps us keep our code base safe and quickly update vulnerable dependencies to their mitigated versions, while taking necessary precautions to prevent supply chain attacks.\n\nFor more details, see The Adobe Incident Response Program.\n\nPreview and Delivery\nRequest Filtering\n\naem.live applies strict path filtering on the edge for any content it delivers to help reduce potential attack surface. This path filtering is functionally equivalent to a web application firewall (WAF) and prevents thousands of attacks every minute.\n\nUnlike generic web application firewalls, which are based on deny-lists of known security exploits, AEM uses a strict list of permissible patterns to permit only legitimate traffic.\n\nThe deep integration of the edge layer and the underlying services that make up delivery and preview ensures maximum security and high performance without the overhead of a dedicated web application firewall.\n\nRate and Volume Limiting\n\nAll requests and usage of the preview and publish services are subject to rate and volume limits that are applied on a project-by-project basis and continuously monitored. This prevents denial-of-service (DoS) attacks, including distributed denial-of-service (DDoS) attacks, and self-inflicted denial of service through misconfigured monitoring, bots, and crawlers. 
The vast majority of attacks prevented fall into the latter category.\n\nSecure Network Routing\n\nEdge Delivery Services enforces TLS and HTTP Strict Transport Security (HSTS) to help ensure that every request is effectively secured.\n\nSite Authentication\n\nSite authentication, once enabled, ensures that only authorized requests can be made to the preview and publish tiers of an AEM site. Requests are required to present one of a list of configured site tokens to be permitted. Users can issue and revoke tokens through the Admin API.\n\nAuthors can use the AEM Sidekick to access protected sites using transient site tokens.\n\nContent Security Policy (CSP)\n\nAEM Edge Delivery Services includes a Content Security Policy (CSP) by default to help protect sites against cross-site scripting (XSS) and other common web-based attacks. The CSP is automatically applied to all new sites built from the boilerplate and cannot be customized. Learn more about the policy here.\n\nAdmin API\nAuthentication\n\nThe Admin API strictly requires authentication for all administrative operations and can be set up to require authentication for content operations like previewing or publishing through the sidekick. This requirement extends to the use of the AEM Sidekick.\n\nAuthentication is delegated to the identity provider (IdP) that backs the content source of the site, such as Microsoft or Google authentication.\n\nRoles and Permissions\n\nUsers can be assigned different roles based on the tasks they need to perform. The mapping is done in the site configuration. 
The following roles are built-in examples:\n\nadmin.role.author: This role allows a user to update content in the Preview environment\nadmin.role.publish: This role allows a user to update content in the Preview and Live environments\n\nSee Admin Permissions and Admin Roles for more detailed information.\n\nRate and Volume Limiting\n\nAll requests and usage of the Admin API are subject to rate and volume limits that ensure smooth operation of the service. In addition to the published limits, Adobe can apply secondary limits on a case-by-case basis.\n\nBackend Integrations\nSharePoint\n\nPlease see SharePoint integration (application) or SharePoint integration (delegated), depending on your setup.\n\nGitHub\n\nThe AEM Code Sync GitHub application uses GitHub permissions to provide access to your GitHub repository, so that code can be made available for delivery in the Code Bus. The following permissions are requested:\n\nread access to metadata – so that code gets assigned to the correct Code Bus space for your repository\nread access to email addresses – so that users can automatically get assigned to the organization and site in the configuration service when a new repository is added\nread/write access to checks – so that the pull request status can be set when the PageSpeed Insights performance test passes\nread/write access to content – so that content can be replicated to the Code Bus. 
Write access is required to send repository_dispatch events and trigger GitHub Actions\nread/write access to deployments – so that each branch in your GitHub repository can correspond to a deployment on aem.live\nread/write access to issues – so that Adobe can raise issues when problems with your code are detected\nread/write access to pull requests – so that Adobe can suggest code updates, if required\n\nThe Code Sync GitHub app does not write directly to your repository; instead, it raises pull requests, so that your approval is required for each code change.\n\nBackends with IP Filtering\n\nIf your backend only allows connections from a specific list of IPs, add 3.227.118.73 to ensure AEM is able to connect to it.\n\nAuthor Tooling\nAEM Sidekick\n\nThe AEM Sidekick is a browser extension installed via Chrome Web Store or Apple App Store and helps authors preview and publish their content. See Sidekick Security for more detailed information.\n\nDeveloper Tooling\nAEM Command Line Application\n\nThe AEM Command Line Application is installed via npm and requires access to the developer's file system, so that the site under development can be previewed using code from the developer's working copy.\n\nIt also requires network access to *.aem.live and *.aem.page, and validates all requests using Transport Layer Security (TLS) against the Node.js certificate store. When man-in-the-middle attacks or tampering with request routing are detected, the command line application refuses to serve a preview site.\n\nCertifications\n\nTo get an up-to-date overview of certifications applicable to Adobe Experience Manager as a Cloud Service (which includes Edge Delivery Services), including\n\nISO 9001\nISO 27017:2015\nISO 27018:2019\nCSA Star Level 2\nIRAP\nCSA CAIQ\nSOC3\nSOC2\nSOC2+HIPAA\n\nPlease see Adobe Trust Center, specifically for solution Experience Cloud and product Adobe Experience Manager Cloud Service. 
Additional resources can be found at Adobe Compliance Certifications, Standards, and Regulations, and the Adobe Common Controls Framework (CCF).\n\nTrust, but verify\n\nIf you'd like to verify our security claims, as a customer, you are allowed to perform penetration tests against our services, even without advance notice. We ask you to stick to the following rules:\n\nPerform load tests incl. simulated (D)DoS attacks only against production infrastructure, which includes your CDN\nIf you find a vulnerability, disclose it responsibly to psirt@adobe.com and we'll get back to you\n\nPrevious\n\nStaging & Environments\n\nUp Next\n\nOperations","lastModified":"1770901864","labs":""},{"path":"/docs/scheduling","title":"Scheduling","image":"/docs/media_1c7b0ed48ddf9d19abb4a4d6e5f5494a878ce54b7.png?width=1200&format=pjpg&optimize=medium","description":"AEM offers a way to execute certain tasks, such as previewing, publishing or purging pages at certain times during the day or in certain periodic ...","content":"Scheduling\n\nAEM offers a way to execute certain tasks, such as previewing, publishing or purging pages at certain times during the day or in certain periodic intervals. It also allows publishing a query index, which makes entries public in the JSON representation that may have only been available in the underlying Excel workbook or Google spreadsheet.\n\nThe tasks to execute and the time they should be executed at are configurable with a crontab sheet, located in the project’s .helix folder, that looks as follows:\n\n\n\nThe sheet should have two columns named when and command, containing time and command to execute.\n\nSpecifying time\n\nThe scheduled time is specified as a text expression and times are expressed in UTC. 
Some examples follow:\n\nat 7:00 am\nRuns at 07:00 UTC every day.\n\nevery 60 minutes starting on the 55th min\nRuns every hour on the 55th minute.\n\nat 3:00 pm on the 18th day of May in 2024\nRuns exactly once at 15:00 UTC on 18 May 2024.\n\nNote the year in the last example. If this is omitted, you schedule a job that runs again on the same date next year:\n\nat 3:00 pm on the 18th day of May\nRuns every year at 15:00 UTC on 18 May.\n\nAlso note that the background scheduler only runs every 5 minutes, at :00, :05, :10 and so on, which means that scheduling a job to be executed before the next scheduler run will not work. For example, if it’s 3:01 pm, and you schedule a job to be executed at 3:04 pm, the next scheduler run at 3:05 pm will consider 3:04 pm a time in the past and therefore not execute that job.\n\nFor more information on supported time expressions, see later’s documentation page.\n\nAvailable tasks\n\nThe following table summarizes the tasks available:\n\nName\t Description \n preview\t Preview a page. \n publish\t Publish a page, including flushing the cache and indexing the page. \n http\t Execute an HTTP request, which allows purging pages on a CDN that are not automatically purged when a page is published. \n publish-index\t Publish an index. \n publish-snapshot\t Publish a snapshot. \n process\t Process another sheet containing pages to preview, publish and purge. \n unpublish\t Unpublish a page.\n\nThe complete reference to those tasks can be found below.\n\nActivate your schedule configuration\n\nWhen you are finished making changes to your crontab sheet, activate it by previewing it. If there are any syntactical errors, a message will be displayed, and none of your tasks in that sheet will be scheduled.\n\nRun multiple tasks at the same time\n\nYou can run multiple tasks at the same time by separating them with a line feed in the cell. 
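To make the crontab layout concrete, here is a small JavaScript sketch (illustrative only; `expandTasks` is a made-up helper, not part of AEM) showing how rows with line-feed-separated commands in a cell expand into individual scheduled tasks. The sheet literal mirrors the JSON representation of a previewed spreadsheet:

```javascript
// Illustrative only: expandTasks is a made-up helper, not an AEM API.
// The crontab sheet has "when" and "command" columns; a cell may hold
// several commands separated by line feeds.
const crontab = {
  data: [
    // two commands in one cell, separated by a line feed
    { when: 'at 7:00 am', command: 'preview /status\npublish /status' },
    { when: 'every 60 minutes starting on the 55th min', command: 'publish-index default' },
  ],
};

// One task per line-feed-separated command: the first token is the
// task name, the remaining tokens are its arguments.
function expandTasks(sheet) {
  return sheet.data.flatMap(({ when, command }) =>
    command.split('\n').map((line) => {
      const [task, ...args] = line.trim().split(/\s+/);
      return { when, task, args };
    })
  );
}

console.log(expandTasks(crontab));
// → three tasks: preview /status, publish /status, publish-index default
```

This only illustrates the sheet shape and the line-feed convention; the actual expansion and scheduling happen server-side.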
If you happen to have a lot of pages to preview, publish and possibly purge at the same time, consider using process as shown below.\n\nTask reference\npreview\n\nPreview a page given as second parameter, e.g.\npreview /document\npreview /spreadsheet.json\n\npublish\n\nPublish a page given as second parameter, e.g.\npublish /document\npublish /spreadsheet.json\n\nhttp\n\nExecute an HTTP request to the given URL, with an optional method, e.g.\nhttp https://www.example.com\nhttp GET https://www.example.com\n\nboth issue a GET request to https://www.example.com\n\npublish-index\n\nPublish an index and additionally rebuild the sitemap if there were changes, e.g.\npublish-index default publishes the index named default in helix-query.yaml.\n\npublish-snapshot\n\nPublish a snapshot given as second parameter, e.g.\npublish-snapshot 1234\n\nprocess\n\nProcess a previewed JSON document line by line, interpreting it as a list of commands and their arguments, where the columns are commands and the cell values are arguments.\n\nFor example, given the following JSON document called /tasks.json:\n\npreview\t publish \n /document1\t /document1 \n /document2\t /document2\n\nAdding a command process /tasks.json to your crontab will preview and publish /document1, and then preview and publish /document2.\n\nunpublish\n\nUnpublish a page given as second parameter, e.g.\nunpublish /document\nunpublish /spreadsheet.json\n\nstyle\ncontent","lastModified":"1754572156","labs":""},{"path":"/developer/sidekick-development","title":"Extending the Sidekick","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"The goal of this document is to explain how developers can interact with the sidekick, and how it can be customized at a project level.","content":"style\ncontent\nExtending the Sidekick\n\nThe goal of this document is to explain how developers can interact with the sidekick, and how it can be customized at a project level.\n\nEvents\n\nThe sidekick emits the following events:\n\nEvent Name\t Target Element\t Payload\t Description 
\n sidekick-ready\t document\t -\t The sidekick element has been added to the DOM and is ready for use. \n previewed\t aem-sidekick\t (string) The path(s) of the previewed resource\t The Preview button has been clicked. \n updated\t aem-sidekick\t (string) The path of the updated resource\t The Reload button has been clicked. \n published\t aem-sidekick\t (string) The path of the published resource\t The Publish button has been clicked. \n deleted\t aem-sidekick\t (string) The path of the deleted resource\t The Delete button has been clicked. \n unpublished\t aem-sidekick\t (string) The path of the unpublished resource\t The Unpublish button has been clicked. \n env-switched\t aem-sidekick\t\n\n(object) An object with the following properties:\n\n(string) sourceUrl: The source URL\n(string) targetUrl: The target URL\n\t The user has switched the environment. \n plugin-used\t aem-sidekick\t (string) The plugin ID\t A plugin button has been clicked \n logged-in\t aem-sidekick\t (object) The profile object\t The user has signed in. \n logged-out\t aem-sidekick\t (object) The profile object\t The user has signed out. \n status-fetched\t aem-sidekick\t (object) The status object\t The status for a resource has been fetched from the Admin API \n custom:<name>\t aem-sidekick\t\n\n(object) An object with the following properties:\n\n(object) config: The sidekick configuration\n(object) location: The location object\n(object) status: The status object\n\t A custom event-based plugin button has been clicked.\n\nAn event’s payload can be found in the event.detail property.\n\nListening for Events\n\nIn your project code (e.g. 
in /scripts/scripts.js), you can react to sidekick events as follows (replace foo with the name of the event you want to listen for):\n\nconst doFoo = ({ detail: payload }) => {\n  console.log('something happened', payload);\n  // your custom code goes here\n};\n\nconst sk = document.querySelector('aem-sidekick');\nif (sk) {\n  // sidekick already loaded\n  sk.addEventListener('foo', doFoo);\n} else {\n  // wait for sidekick to be loaded\n  document.addEventListener('sidekick-ready', () => {\n    // sidekick now loaded\n    document.querySelector('aem-sidekick')\n      .addEventListener('foo', doFoo);\n  }, { once: true });\n}\n\nCustomizing the Sidekick\n\nYou can customize the sidekick for your project by adding a sidekick object to your site configuration. See here for an example of how to use the configuration service.\n\n{\n  \"project\": \"My project\",\n  \"plugins\": []\n}\n\n\nFor all available configuration options, see the config schema. Here are some basics to get you started:\n\nHost Settings\n\nNote: The following host properties are optional in the sidekick configuration and should only be used to explicitly override the CDN settings in your site configuration:\n\nhost (string) The host name of the production website (overrides cdn.prod.host)\npreviewHost (string) The host name of the preview environment (overrides cdn.preview.host, defaults to *.aem.page)\nliveHost (string) The host name of the live environment (overrides cdn.live.host, defaults to *.aem.live)\nreviewHost (string) The host name of the review environment (overrides cdn.review.host, defaults to *.aem.reviews)\ntrustedHosts (string[]) Additional host names trusted to use sidekick authentication (use with caution!)\nCustom Plugins\n\nPlugins allow you to add custom functionality to the sidekick, enhancing your users’ experience.\n\nCommon Plugin Properties\n\nThe following properties are applicable to all plugin types:\n\nid (string) is mandatory and must be unique within a sidekick 
configuration.\ntitle (string) will be shown on the plugin button.\ntitleI18n (object<string, string>) optionally defines translated titles.\nSupported languages: en, de, es, fr, it, ja, ko, pt_BR, zh_CN, and zh_TW.\npinned (boolean) determines if the plugin is pinned to the toolbar (default) or folded into the … menu.\nenvironments (string[]) specifies where the plugin should appear:\nany (default) - any environment\ndev - the local development URL (e.g. http://localhost:3000/)\nedit - an editor view (e.g. Word, Excel, Google Docs)\nadmin - a folder view (e.g. SharePoint, Google Drive)\npreview - a preview URL (e.g. https://main--site--org.aem.page/)\nlive - a live URL (e.g. https://main--site--org.aem.live/)\nprod - the production URL (e.g. https://www.example.com/)\nexcludePaths (string[]) defines patterns to exclude the plugin based on the path in the current tab’s URL.\nincludePaths (string[]) defines patterns to include the plugin based on the path in the current tab’s URL.\nURL-based Plugins\n\nYou can specify a url that will be opened in a new tab when the plugin button is clicked:\n\n{\n  \"plugins\": [\n    {\n      \"id\": \"foo\",\n      \"title\": \"Foo\",\n      \"url\": \"/tools/sidekick/foo.html\"\n    }\n  ]\n}\n\n\nThe following properties are specific to URL-based plugins:\n\npassConfig (boolean) pass the project configuration via query parameters\npassReferrer (boolean) pass the originating URL via query parameters\nEvent-based Plugins\n\nAlternatively, you can specify the name of an event to be fired when the plugin button is clicked. This allows the execution of custom JavaScript code in the context of your page by listening for the event on the sidekick element. Custom events will have a custom: prefix. For your convenience, the custom event dispatched contains a copy of the current sidekick state.\n\nNote: Event-based plugins can only be used in the following environments: Development, Preview, Live and Production. 
Executing custom code is not possible in Edit or Admin.\n\n{\n  \"plugins\": [\n    {\n      \"id\": \"foo\",\n      \"title\": \"Foo\",\n      \"event\": \"foo\"\n    }\n  ]\n}\n\n\nIn your project code (e.g. in /scripts/scripts.js), you can react to the event as follows:\n\nconst doFoo = ({ detail: payload }) => {\n  console.log('a custom event happened', payload);\n  // your custom code goes here\n};\n\nconst sk = document.querySelector('aem-sidekick');\nif (sk) {\n  // sidekick already loaded\n  sk.addEventListener('custom:foo', doFoo);\n} else {\n  // wait for sidekick to be loaded\n  document.addEventListener('sidekick-ready', () => {\n    // sidekick now loaded\n    document.querySelector('aem-sidekick')\n      .addEventListener('custom:foo', doFoo);\n  }, { once: true });\n}\n\nSpecial Plugin Types\nPalette Plugins\n\nPalettes are variants of URL-based plugins which load the configured URL inside a floating palette instead of opening a new tab.\n\nisPalette (boolean) opens the target of a URL-based plugin in a palette instead of a new tab.\npaletteRect (string) optionally defines the size and position of the palette in the format of a DOMRect.\n\nThe following example creates a standard palette placed at the bottom left of the window:\n\n{\n  \"plugins\": [\n    {\n      \"id\": \"foo\",\n      \"title\": \"Foo\",\n      \"url\": \"/tools/sidekick/foo-palette.html\",\n      \"isPalette\": true\n    }\n  ]\n}\n\n\nIf you wish to change the size and positioning of your palette, use paletteRect:\n\n{\n  \"plugins\": [\n    {\n      \"id\": \"foo\",\n      \"title\": \"Foo\",\n      \"url\": \"/tools/sidekick/foo-palette.html\",\n      \"isPalette\": true,\n      \"paletteRect\": \"top:150px;left:7%;height:675px;width:85vw;\"\n    }\n  ]\n}\n\nManipulating a palette from within\n\nUsing Chrome's messaging API, you can tell the sidekick to close your palette, for example when the user clicks a button inside of it. 
The id property of the message object is the ID of your palette plugin:\n\nchrome.runtime.sendMessage('igkmdomcgoebiipaifhmpfjhbjccggml', {\n  id: 'foo',\n  action: 'closePalette',\n});\n\n\nYou can also resize and reposition your palette dynamically. The id property of the message object is the ID of your palette plugin. The rect object can contain CSS properties and values for top, right, bottom, left, width and height.\n\nchrome.runtime.sendMessage('igkmdomcgoebiipaifhmpfjhbjccggml', {\n  id: 'foo',\n  action: 'resizePalette',\n  rect: { \"top\": \"100px\", \"left\": \"20px\", \"width\": \"50vw\", \"height\": \"500px\" }\n});\n\nPopover Plugins\n\nPopovers are variants of URL-based plugins which load the configured URL inside a popover instead of opening a new tab. Popovers are centered above the plugin's button.\n\nisPopover (boolean) opens the target of a URL-based plugin in a popover instead of a new tab.\npopoverRect (string) optionally defines the width and height of the popover in the format of a DOMRect.\n{\n  \"plugins\": [\n    {\n      \"id\": \"foo\",\n      \"title\": \"Foo\",\n      \"url\": \"/tools/sidekick/foo-popover.html\",\n      \"isPopover\": true\n    }\n  ]\n}\n\n\n\nIf you wish to change the width or height of your popover, use popoverRect:\n\n{\n  \"plugins\": [\n    {\n      \"id\": \"foo\",\n      \"title\": \"Foo\",\n      \"url\": \"/tools/sidekick/foo-popover.html\",\n      \"isPopover\": true,\n      \"popoverRect\": \"width:300px;height:200px;\"\n    }\n  ]\n}\n\n\nA theme query parameter is appended to the URL to style the iframe in line with the current sidekick theme. If no background color is set on the content’s body, it will inherit the popover’s translucent background.\n\nManipulating a popover from within\n\nUsing Chrome's messaging API, you can tell the sidekick to close your popover, for example when the user clicks a button inside of it. 
The id property of the message object is the ID of your popover plugin:\n\nchrome.runtime.sendMessage('igkmdomcgoebiipaifhmpfjhbjccggml', {\n  action: 'closePopover',\n  id: 'foo',\n});\n\n\nYou can also resize your popover dynamically. The id property of the message object is the ID of your popover plugin. The rect object can contain CSS properties and values for width and height.\n\nchrome.runtime.sendMessage('igkmdomcgoebiipaifhmpfjhbjccggml', {\n  id: 'foo',\n  action: 'resizePopover',\n  rect: { \"width\": \"50vw\", \"height\": \"500px\" }\n});\n\nContainer Plugins\n\nContainers allow you to group plugins together and help save space in the toolbar. Clicking a container plugin simply toggles its dropdown; it can’t have its own URL or event action.\n\nisContainer (boolean) renders a plugin as a dropdown instead of a button.\ncontainerId (string) adds a plugin to a container plugin with the specified ID.\n\nThe following example creates a container named “Tools” and places a plugin “Foo” in it:\n\n{\n  \"plugins\": [\n    {\n      \"id\": \"tools\",\n      \"title\": \"Tools\",\n      \"isContainer\": true\n    },\n    {\n      \"id\": \"foo\",\n      \"containerId\": \"tools\",\n      \"title\": \"Foo\",\n      \"event\": \"foo\"\n    }\n  ]\n}\n\nBadge Plugins\n\nBadges allow you to add labels to the sidekick under certain conditions. They will be rendered on the right-hand side of the toolbar. 
Badges have a merely decorative purpose and can’t be clicked.\n\nisBadge (boolean) renders a plugin as a badge instead of a button.\nbadgeVariant (string) optionally determines the badge’s color scheme (gray, red, orange, yellow, chartreuse, celery, green, seafoam, cyan, blue, indigo, purple, fuchsia, or magenta)\n\nThe following example adds a “Stage” badge to the sidekick in the preview environment:\n\n{\n  \"plugins\": [\n    {\n      \"id\": \"stage\",\n      \"title\": \"Stage\",\n      \"isBadge\": true,\n      \"environments\": [\"preview\"]\n    }\n  ]\n}\n\nCustomizing Default Plugins\n\nBy specifying an id of a default plugin (such as preview, update or publish), you can modify the conditions under which they will be rendered. The following plugin properties will be considered: environments, excludePaths, and includePaths.\n\nIn the following example, the publish plugin would only appear in the preview environment, and only if the path of the current resource does not contain a drafts segment:\n\n{\n  \"plugins\": [\n    {\n      \"id\": \"publish\",\n      \"excludePaths\": [\"**/drafts/**\"],\n      \"environments\": [\"preview\"]\n    }\n  ]\n}\n\nPublish Confirmations\n\nThe publish plugin can be configured to show a confirmation dialog prior to executing the action, prompting the user to verify the action and giving them the ability to cancel it. 
This can be used as an extra precaution, for example when dealing with critical content that could have business implications if published erroneously:\n\n{\n  \"plugins\": [\n    {\n      \"id\": \"publish\",\n      \"confirm\": true\n    }\n  ]\n}\n\n\nNote: Publish confirmations can only be enabled for your entire site.\n\nCustom Edit URLs\n\nIf your project does not use SharePoint or Google Drive as content source, you can tell the sidekick how to link to your custom editing environment when the user clicks Edit.\n\nThe following two config options are available:\n\neditUrlLabel (string) set the label visible to the user\neditUrlPattern (string) defines an URL pattern for the custom editing environment. Supports placeholders like {{contentSourceUrl}} or {{pathname}}.\n{\n  \"editUrlLabel\": \"Your Editor\",\n  \"editUrlPattern\": \"{{contentSourceUrl}}{{pathname}}?cmd=open\"\n}\n\nSpecial Views\n\nYou can specify a special view for the sidekick to redirect to when the current tab’s URL matches a certain pattern. This can help you provide a seamless user experience across different media types, and also enables the execution of custom code (event-based plugins). The original resource URL will be available in a url query parameter.\n\nThe properties path and viewer are mandatory. 
Optionally, you can specify a title that will be shown at the top, and you can provide localized titles in a titleI18n object:\n\n{\n  \"specialViews\": [\n    {\n      \"title\": \"Custom JSON Viewer\",\n      \"path\" : \"**.json\",\n      \"viewer\": \"/tools/sidekick/custom-json-viewer/index.html\"\n    }\n  ]\n}\n\n\n\nAt the path specified by viewer, add an HTML file to your GitHub repository, for example:\n\n<html>\n<head>\n  <title>Custom JSON Viewer</title>\n  <meta http-equiv=\"Content-Type\" content=\"text/html;charset=UTF-8\">\n  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n  <link rel=\"stylesheet\" href=\"./custom-json-viewer.css\">\n  <!-- add your custom code -->\n  <script src=\"./custom-json-viewer.js\"></script>\n</head>\n  <body>\n  </body>\n</html>\n\n\nAdd an optional CSS file in the same directory, and a JS file with your custom logic, for example:\n\ntry {\n  // get the resource URL from the url query parameter\n  const url = new URL(window.location.href).searchParams.get('url');\n  if (url) {\n    const res = await fetch(url);\n    if (res.ok) {\n      const text = await res.text();\n      const data = JSON.parse(text);\n      // do something with the data, e.g.\n      document.body.innerHTML = `\n        <pre>\n          ${JSON.stringify(data, null, 2)}\n        </pre>\n       `;\n    } else {\n      throw new Error(`failed to load ${url}: ${res.status}`);\n    }\n  }\n} catch (e) {\n  console.error('error rendering custom json view', e);\n}\n\nDevelopment Workflow\n\nThe following workflows are designed for detached sidekick development to prevent unintentional disruptions for the authors on your production site:\n\nUsing a Site Copy\n\nIf your site’s configuration is stored in the Configuration Service, you can use temporary site copies for sidekick development:\n\nCreate a copy of your site in the Configuration Service. 
For example, if the name of your original site is site1, you could create a site1-dev, reusing the same code and content.\nOpen the preview URL for site1-dev in your browser: https://main--site1-dev--org.aem.page.\nMake your desired changes to the sidekick object in the site1-dev configuration.\nRefresh the browser tab after each change to test your changes.\nWhen done, copy the sidekick object from site1-dev to the site1 configuration to roll your changes out to all authors.\n\nNote: When using the sidekick in an editor environment (Google Drive or Microsoft Sharepoint), it will load the config from the original site by default. If you want the sidekick to let you choose which configuration to load, first add the new site to your sidekick from the preview or live URL. Now the sidekick will display a picker with all matching sites.\n\nUsing a Repository Branch\n\nIf your site’s configuration is not stored in the Configuration Service, you can use a branch in GitHub for sidekick development:\n\nOn your site’s GitHub repository, create a branch from main. For this example, we’ll use dev as the branch name.\nOpen the preview URL for the dev branch in your browser: https://dev--site1--org.aem.page.\nOpen or create the following file in your repository: /tools/sidekick/config.json.\nMake your desired changes to the sidekick configuration file and push changes to the dev branch.\nRefresh the browser tab after each change to test your changes.\nWhen done, create a pull request and merge the changes to the main branch of your repository.\n\nCaution: Never commit directly to the main branch in your original repository. Always create a branch and ask for a review of your changes via pull request before merging into main.\n\nNote: When using the sidekick in an editor environment (Google Drive or Microsoft Sharepoint), it will load the config from the original site by default. 
If you want the sidekick to let you choose which configuration to load, first add the new site to your sidekick from the preview or live URL. Now the sidekick will display a picker with all matching sites.","lastModified":"1769454403","labs":""},{"path":"/developer/anatomy-of-a-project","title":"The Anatomy of a Project","image":"/developer/media_1d393eba317b100eecbfacc5c6e7edda818af6fba.png?width=1200&format=pjpg&optimize=medium","description":"This document describes what a typical project looks like from a code standpoint. Before reading this document, please familiarize yourself with the Developer Tutorial.","content":"style\ncontent\n\nThe Anatomy of a Project\n\nThis document describes what a typical project looks like from a code standpoint. Before reading this document, please familiarize yourself with the Developer Tutorial.\n\nCode: Git and GitHub\n\nOne of our defining philosophies is that it is easiest to allow users to work with the tools that they are familiar with. The overwhelming majority of developers manage their code in systems based on Git, so it only makes sense to allow developers to work with Git to manage and deploy their code.\n\nWe are using a buildless approach that runs directly from your GitHub repo. After installing the AEM GitHub bot on your repo, websites are automatically created for each of your branches for content preview on https://<branch>--<repo>--<owner>.aem.page/ and the production site on https://<branch>--<repo>--<owner>.aem.live/ for published content.\n\nEvery resource that you put into your GitHub repo is available on your website, so a file in your GitHub repo on the main branch in /scripts/scripts.js will be available on https://main--<repo>--<owner>.aem.page/scripts/scripts.js\nThis should be very intuitive. 
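The mapping above can be sketched in a few lines of JavaScript (illustrative only; `previewUrl` is a made-up helper, not an AEM API), including the subdomain length constraint noted below:

```javascript
// Illustrative sketch only; previewUrl is a made-up helper, not an AEM API.
// A resource at <path> on branch <branch> is served from
// https://<branch>--<repo>--<owner>.aem.page<path>
function previewUrl(branch, repo, owner, path = '/') {
  const subdomain = `${branch}--${repo}--${owner}`;
  // DNS limits a subdomain label to 63 characters, hence the constraint
  // on the combined <branch>--<repo>--<owner> length.
  if (subdomain.length > 63) {
    throw new Error(`"${subdomain}" exceeds the 63-character subdomain limit`);
  }
  return `https://${subdomain}.aem.page${path}`;
}

console.log(previewUrl('main', 'mysite', 'aemtutorial', '/scripts/scripts.js'));
// → https://main--mysite--aemtutorial.aem.page/scripts/scripts.js
```

Swap `.aem.page` for `.aem.live` to get the corresponding URL of the published site.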
There are a few “special” files that Adobe Experience Manager uses to connect the content into your website.\n\nIf you wish to use a git host other than GitHub, see Bring your own git.\n\nWe strongly recommend that repos are kept public on GitHub, to foster the community. For a public-facing website there really is no need to keep the code hidden, as it is being served to the browsers of your website.\n\nImportant notes:\n\nThe combination <branch>--<repo>--<owner> must not exceed 63 characters (including the hyphens/dashes). This is a subdomain name constraint.\nbranch, repo and owner cannot contain --.\nThe Entry Point: head.html\n\nThe head.html file is the most important extension point to influence the markup of the content. The easiest way to think of it is that this file is injected on the server side as part of the <head> HTML tag and is combined with the metadata coming from the content.\n\nThe head.html should remain largely unchanged from the boilerplate and there are only very few legitimate reasons in a regular project to make changes. Those include remapping your project to a different base URL (to expose your project in a folder other than the root folder of your domain on your CDN), or supporting legacy browsers, which usually require scripts that are not loaded as modules.\n\nAdding marketing technology like Adobe Web SDK, Google Tag Manager or other 3rd party scripts to your head.html file is strongly advised against due to performance impacts. 
Adding inline scripts or styles to head.html is also not advisable for performance and code management reasons; see the section Scripts and Styles below for more information about handling scripts and styles.\n\nPlease see the following examples.\n\nhttps://github.com/adobe/helix-project-boilerplate/blob/main/head.html\nhttps://github.com/adobe/express-website/blob/main/head.html\nhttps://github.com/adobe/business-website/blob/main/head.html\nNot Found: 404.html\n\nTo create a custom 404 response, place a 404.html file into the root of your GitHub repository. This will be served on any URL that doesn’t map to an existing resource in either content or code, and replaces the body of the out-of-the-box minimalist 404 response.\n\n\nThe 404 can mimic the markup of an existing page including your code for the site, with navigation, footers, etc., or it can have a completely different appearance.\n\nPlease see Error Pages for more info and the following examples.\n\nhttps://github.com/adobe/design-website/blob/main/404.html See in Action\nhttps://github.com/adobecom/blog/blob/main/404.html See in Action\nDon’t Serve: .hlxignore\n\nThere are some files in your repo that should not be served from your website, either because you would like to keep them private or they are not relevant to the delivery of the website (e.g. tests, build tools, build artifacts, etc.) and don’t need to be observed by the AEM bot. You can add those to a .hlxignore file in the same format as the well-known .gitignore file.\n\nPlease see the following example.\n\nhttps://github.com/adobe/helix-website/blob/main/.hlxignore\nConfiguration\n\nConfiguration of your site is managed exclusively via the Configuration service. 
This section covers the most important configurations you need to know about for your site.\n\nThe Content Connection\n\nThe content configuration defines the connection between your site and its content source, telling us where to get content from when pages are previewed.\n\nDocument Authoring, AEM Authoring, Sharepoint, and Google Drive are all natively supported, with additional sources possible via Bring Your Own Markup\n\nSee the Content Config API documentation for more information on how to set this up.\n\nTame the Bots: robots.txt\n\nA robots.txt file is generally a regular file that is served as you would expect on your production website on your own domain. To protect your preview and origin site from being indexed, your .page and .live sites will serve a robots.txt file that disallows all robots instead of the robots.txt file from your repo.\n\nThe content of your robots.txt is configured with the Robots Config API.\n\nQuery and Indexing\n\nThere is a flexible indexing facility that lets you keep track of all of your content pages conveniently as a spreadsheet or json. This facility is often used to show lists or feeds of pages as well as to filter and sort content on a website.\n\nFor sites with google drive or sharepoint content sources, no special configuration is required beyond creating and publishing specially named spreadsheets. For more advanced indexing, or for sites using a BYOM source, you can use the Index Config API to define indices.\n\nSee the document Indexing for more information.\n\nAutomate Your Sitemap\n\nComplex sitemaps can automatically be created for you whenever authors publish new content, including flexible hreflang mappings where needed. 
This functionality is usually based on the indexing facility.\nSee the document Sitemaps for sitemap configuration options.\n\nCommonly Used File and Folder Structure\n\nBeyond the files treated as special or configuration files, there is a commonly-used structure that is expressed in the Boilerplate repo.\n\nThe common folders below are usually in the root directory of a project repo, but in cases where only a portion of a website is handled by AEM, they are often moved to a subfolder to reflect the mapping of the route of the origin in a CDN.\n\nThis means that in a case where, for example, only /en/blog/ is initially mapped to AEM from the CDN, all the folder structures below (e.g. /scripts, /styles, /blocks, etc.) are moved into the /en/blog/ folder in GitHub to keep the CDN mapping as simple as possible.\n\nWith a simple adjustment of the reference to scripts.js and styles.css in head.html (see above) it is possible to indicate that all the necessary files are loaded from the respective code base directory. To avoid URL rewriting, the same folder structure is also created in the content source (e.g. SharePoint or Google Drive) by having a directory structure of /en/blog/.\nIn many cases, as the AEM footprint grows on a site, there is a point in time when the code gets moved back to the root folder and the head.html references are adjusted accordingly.\n\nScripts and Styles\n\nBy convention in an AEM project, the head.html references styles.css, scripts.js, and aem.js located in /scripts and /styles, as the entry points for the project code.\n\nscripts.js is where your global custom JavaScript code lives and is where the block loading code is triggered. 
styles.css hosts the global styling information for your site, and minimally contains the global layout information that is needed to display the Largest Contentful Paint (LCP).\n\nAs all three files are loaded before the page can be displayed, it is important that they are kept relatively small and executed efficiently.\n\nBeyond styles.css, a lazy-styles.css file is commonly used, which is loaded after the LCP event, and therefore can contain larger or less performance-critical CSS. This could be a good place for fonts or global CSS that is below the fold.\n\nIn addition to scripts.js, there is the commonly-used delayed.js. This is a catch-all for libraries that need to be loaded on a page but should be kept from interfering with the delivery of the page. This is a good place for code that is outside of the control of your project and usually includes the martech stack and other libraries.\n\nPlease see the document Keeping it 100, Web Performance for more information about optimizing your site performance.\n\nBlocks\n\nMost of the project-specific CSS and JavaScript code lives in blocks. Authors create blocks in their documents. Developers then write the corresponding code that styles the blocks with CSS and/or decorates the DOM to take the markup of a block and transform it to the structure that’s needed or convenient for desired styling and functionality.\n\n\nThe block name is used as both the folder name of a block and the filename for the .css and .js files that are loaded by the block loader when a block is used on a page.\n\n\nThe block name is also used as the CSS class name on the block to allow for intuitive styling. The JavaScript is loaded as a module (ESM) and exports a default function that is executed as part of the block loading.\n\nA simple example is the Columns Block. It adds additional classes in JavaScript based on how many columns are in the respective instance created by the author. 
This allows flexible styling of content in two columns vs. three columns.\n\nIcons\n\nMost projects have SVG files that are usually added to the /icons folder, and can be referenced with a :<iconname>: notation by authors. By default, icons are inlined into the DOM so they can be styled with CSS, without having to create SVG symbols.\n\nPrevious\n\nDeveloper Tutorial\n\nUp Next\n\nBlock Collection","lastModified":"1772533737","labs":""},{"path":"/developer/martech-integration","title":"Configuring Adobe Experience Cloud Integration","image":"/developer/media_1ba5390c0b3026987033e62fe020d3f7dae6c0332.png?width=1200&format=pjpg&optimize=medium","description":"This article will walk you through the steps of setting up an integration with the Adobe Marketing Technology stack. The stack combines Adobe Experience Platform ...","content":"style\ncontent\n\nConfiguring Adobe Experience Cloud Integration\n\nThis article will walk you through the steps of setting up an integration with the Adobe Marketing Technology stack. The stack combines Adobe Experience Platform WebSDK, Adobe Analytics, Adobe Target or Adobe Journey Optimizer, Adobe Client Data Layer and Adobe Experience Platform Tags. 
This will let you personalize your pages, automatically track how they perform and track your custom events as well.\n\nChoosing the Right Integration\n\nThis document covers integration with Adobe's marketing technology stack\n(Adobe Analytics, Adobe Target/AJO, Adobe Experience Platform).\n\nLooking for Google Analytics and Google Tag Manager integration instead?\nSee our Google Analytics & Tag Manager Integration guide.\n\nWhen to Use This Integration\nYou're using Adobe Analytics as your primary analytics solution\nYou need Adobe Target or Adobe Journey Optimizer for personalization\nYou want to leverage Adobe Experience Platform's data capabilities\nYou're already invested in the Adobe Experience Cloud ecosystem\nIntegration Comparison\nFeature\t Adobe Experience Cloud\t Google Analytics & GTM \n Analytics\t Adobe Analytics\t Google Analytics 4 \n Tag Management\t Adobe Experience Platform Tags\t Google Tag Manager \n Personalization\t Adobe Target/AJO\t Limited (via GTM) \n Data Layer\t Adobe Client Data Layer\t Google Data Layer \n Cost\t Enterprise licensing\t Free tier available \n Privacy\t GDPR/CCPA compliant\t GDPR/CCPA compliant\nIntegration Overview\n\nThe Adobe Experience Cloud integration provides a comprehensive marketing technology stack that enables:\n\nReal-time personalization through Adobe Target or Adobe Journey Optimizer\nAdvanced analytics with Adobe Analytics and Experience Platform\nUnified data collection via Adobe Client Data Layer\nTag management through Adobe Experience Platform Tags\nCross-channel customer journey tracking\n\nBefore you go further, please also check our native Experimentation capabilities.\n\nHow It Works\n\nThe Adobe Experience Cloud stack components work together in a coordinated manner:\n\nAdobe Experience Platform WebSDK: allows the page to interact with the Adobe Experience Platform services\nAdobe Target and Adobe Journey Optimizer: personalizes and optimizes the page with your desired marketing 
campaigns\nAdobe Analytics: tracks and analyzes the data for the customer journey\nAdobe Client Data Layer: offers a standard method to collect and store client-side events during the customer journey before they are submitted to Adobe Analytics\nAdobe Experience Platform Tags: lets you deploy and manage the tags that power the customer experience\n\nThe personalization rules will be dynamically evaluated server-side by Adobe Target or Adobe Journey Optimizer, and will be delivered as a list of page modifications that will be applied as the page renders each block in order to minimize content flicker. Once the page is fully rendered, the Adobe Analytics instrumentation is done and key business events are captured. Finally, Adobe Experience Platform Tags is loaded and applies the rules and data elements you defined in a delayed manner to limit the performance impact on the initial page load.\n\nRationale\n\nA traditional all-in-one instrumentation done solely via Adobe Experience Platform Tags typically either has a performance impact on the initial page load, or, if delayed, ends up introducing content flicker when the personalization of the page is applied.\n\nOur optimized approach builds on top of:\n\nTop and bottom of page events so we can enable personalization early in the page load, and wait for the page to fully render to report metrics\nData object variable mapping so we can gather key page metadata for your page in Adobe Analytics\n\nOn top of this, we also fine-tuned the code to:\n\nAvoid content flicker as the DOM is dynamically rendered to support AEM EDS and/or SPA use cases\nDynamically load personalization and data layer dependencies only when needed\nAllow mapping AEM RUM data to Adobe Analytics to enrich your reports\nGracefully handle speculative pre-rendering rules\nPerformance Considerations\n\nIn our tests, you can expect a baseline performance impact as follows. 
The performance varies depending on whether Adobe Target or Adobe Journey Optimizer applies personalization changes to your page.\n\nPage Modifications Explained:\n\nWithout page modifications: When no personalization rules match the current visitor or page, so the page renders normally without DOM changes\nWith page modifications: When Adobe Target/AJO identifies matching personalization rules and applies DOM modifications (changing content, layout, or adding elements based on experimentation or personalization campaigns)\n\nTo the baseline impact, you'd also need to add the overhead of more complex page modifications, especially when using custom JavaScript snippets in your personalization rules.\n\nMobile\n\t Largest Contentful Paint\t Total Blocking Time\t PageSpeed \n Without page modifications\t +0.2s\t 0~10ms\t 0~1 pts \n With page modifications\t +1.2s\t 0~10ms\t 1~5 pts\nDesktop\n\t Largest Contentful Paint\t Total Blocking Time\t PageSpeed \n Without page modifications\t +0.2s\t 0ms\t 0~1 pts \n With page modifications\t +0.6s\t 0~10ms\t 0~4 pts\nPrerequisites\n\nBefore you can leverage this plugin, please make sure you have access to:\n\nAdobe Experience Platform (no full license needed, just basic permissions for data collection)\nAdobe Target or Adobe Journey Optimizer\nAdobe Analytics\n\nYou’ll also need to have pre-configured your:\n\nDatastream in the Adobe Experience Platform to connect to your Adobe Target and Adobe Analytics solutions\nAdobe Experience Platform Tags (Launch) containers, and make sure that\nthe Adobe Client Data Layer is enabled, and that you have checked the “Inject Adobe Client Data Layer (ACDL) library if not present”\nyou do not add the Adobe Experience Platform Web SDK, the Adobe Analytics or the Adobe Target extensions. Those are added automatically by our plugin\nInstallation & Configuration\nStep 1: Install the Plugin\n\nFollow the technical steps documented in the aem-martech GitHub repository. 
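For orientation, the Adobe Client Data Layer starts out as a plain global array that page code pushes events onto; the ACDL library takes over processing once it loads. A minimal sketch, assuming the default adobeDataLayer instance name (the event name and payload below are illustrative, not prescribed by the plugin):

```javascript
// Minimal sketch: pushing a custom event onto the Adobe Client Data
// Layer. Assumes the default instance name 'adobeDataLayer'; until the
// ACDL library loads, this is just a plain array acting as a queue.
globalThis.adobeDataLayer = globalThis.adobeDataLayer || [];
globalThis.adobeDataLayer.push({
  event: 'cta:click',                    // illustrative event name
  eventInfo: { label: 'hero-button' },   // illustrative payload
});
```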
Make sure the dataLayerInstanceName in the configuration matches the name you used in the ACDL extension in your Launch container (it will default to adobeDataLayer on both sides).\n\nStep 2: Configure Consent Management\n\nMake sure your consent management is properly connected to the plugin as documented in the repository. This is crucial for GDPR/CCPA compliance.\n\nLegal Disclaimer: This library defaults user consent to pending to comply with privacy regulations. Overriding this behavior to grant consent by default (e.g., setting it to in) without explicit user agreement may have significant legal implications under regulations like GDPR and CCPA. We strongly advise consulting with your legal team before altering the default consent handling.\n\nStep 3: Deploy Your Code\n\nCommit and push your code to trigger the deployment.\n\nStep 4: Configure Adobe Target\n\nSet up an experiment in Adobe Target and preview the page to ensure the integration is working.\n\nStep 5: Enable Instrumentation\n\nAdd the Target metadata property to your page to trigger the instrumentation, or the equivalent solution you used to set the personalization config flag.\n\nVerification & Testing\n\nIf the instrumentation is properly done, you should see the following calls in your browser's Network tab when you load the page. Whether the page is actually modified or not will depend on the configuration you set in Adobe Target.\n\nExpected Network Calls\nhttps://edge.adobedc.net/ee/v1/interact: fetches the Adobe Target propositions (i.e. 
page modifications)\nhttps://edge.adobedc.net/ee/v1/collect: tracks a page view event in Adobe Analytics\nhttps://edge.adobedc.net/ee/v1/privacy/set-consent: persists the user consent into Adobe Experience Platform\nTesting Tools\nUse your browser's Developer Tools Network tab to monitor API calls\nCheck the Adobe Experience Platform Debugger browser extension\nVerify data flow in Adobe Analytics reports\nTest personalization rules in Adobe Target\nPrivacy & Consent Management\n\nThe integration includes comprehensive privacy controls:\n\nDefault consent state: Set to \"pending\" to comply with privacy regulations\nGranular consent categories: Support for collect, marketing, personalize, and share permissions\nConsent management integration: Compatible with major consent management platforms\nData minimization: Only collects data when explicit consent is given\nTroubleshooting\nCommon Issues\n\nNo personalization appearing:\n\nCheck that the Target metadata property is set\nVerify your Adobe Target configuration\nEnsure consent is properly granted\n\nAnalytics data not flowing:\n\nVerify your datastream configuration\nCheck consent settings\nMonitor network calls for errors\n\nPerformance issues:\n\nReview your personalization rules complexity\nConsider using lazy loading for non-critical tags\nMonitor Core Web Vitals impact\nNext Steps & Related Resources\nAdditional Documentation\nComprehensive Tutorial: Experience League Tutorial\nNative Experimentation: AEM EDS Experimentation Capabilities\nAlternative Integration Options\nGoogle Analytics & GTM: GTM MarTech Integration\nTechnical Resources\nGitHub Repository: adobe-rnd/aem-martech\nAdobe Experience Platform: WebSDK Documentation\nAdobe Target: Implementation Guide\nAdobe Analytics: Implementation Guide","lastModified":"1754060924","labs":"AEM Sites"},{"path":"/developer/cli-reference","title":"aem Command Line 
Reference","image":"/developer/media_130390ea6c8366c27bfda6908b30667a11e564fc6.png?width=1200&format=pjpg&optimize=medium","description":"","content":"style\ncontent\n\naem Command Line Reference\nInstallation\n$ npm install -g @adobe/aem-cli\n\nUsage\n$ aem --help\n\n\nHelp output:\n\nUsage: aem <command> [options]\n\nCommands:\n  aem up      Run a AEM development server\n  aem import  Run the AEM import server\n\nOptions:\n  --version                Show version number                         [boolean]\n  --log-file, --logFile    Log file (use \"-\" for stdout)  [array] [default: \"-\"]\n  --log-level, --logLevel  Log level\n  [string] [choices: \"silly\", \"debug\", \"verbose\", \"info\", \"warn\", \"error\"] [default: \"info\"]\n  --help                   Show help                                   [boolean]\n\nuse <command> --help to get command specific details.\n\nfor more information, find our manual at https://github.com/adobe/helix-cli\n\nAvailable Commands\nup\nimport\nup\nServer Options\n--port\nStart development server on port\nDefault: 3000\n--addr\nBind development server on address. 
Use * to bind to any address and allow external connections.\nDefault: \"127.0.0.1\"\n--stop-other, --stopOther\nStop other AEM CLI running on the above port\nDefault: true\n--tls-cert, --tlsCert\nFile location for your .pem file for local TLS support\n--tls-key, --tlsKey\nFile location for your .key file for local TLS support\nAEM Options\n--url, --pagesUrl, --pages-url\nThe origin URL to fetch content from.\n--livereload\nEnable automatic reloading of modified sources in the browser.\nDefault: true\n--no-livereload, --noLiveReload\nDisable live-reload\n--open\nOpen a browser window at specified path\nDefault: \"/\"\n--no-open, --noOpen\nDisable automatic opening of browser window\n--print-index, --printIndex\nPrints the indexed records for the current page.\nDefault: false\n--cache\n--forward-browser-logs, --forwardBrowserLogs\nForward browser console logs to terminal\nDefault: false\nOptions\n--version\nShow version number\nDefault: boolean\n--log-file, --logFile\nLog file (use \"-\" for stdout)\nDefault: \"-\"\n--log-level, --logLevel\nLog level\nChoices: \"silly\", \"debug\", \"verbose\", \"info\", \"warn\", \"error\"\nDefault: \"info\"\n--site-token, --siteToken\nSite token to be used by the cli to access the website\n--alpha-cache, --alphaCache\nPath to local folder to cache the responses (alpha feature, may be removed without notice)\n--allow-insecure, --allowInsecure\nWhether to allow insecure requests to the server\nDefault: false\n--cookies\nProxy all cookies in requests. By default, only the hlx-auth-token cookie is proxied.\nDefault: false\n--html-folder, --htmlFolder\nServe HTML files from this folder without extensions (e.g., /folder/file serves folder/file.html or folder/file.plain.html). Use this to preview content changes if you do not have access to the authoring system. 
Can be helpful for Developing with AI Tools\n--help\nShow help\nDefault: boolean\nimport\nServer Options\n--port\nStart import server on port\nDefault: 3001\n--addr\nBind import server on address. Use * to bind to any address and allow external connections.\nDefault: \"127.0.0.1\"\n--stop-other, --stopOther\nStop other AEM CLI running on the above port\nDefault: true\n--tls-cert, --tlsCert\nFile location for your .pem file for local TLS support\n--tls-key, --tlsKey\nFile location for your .key file for local TLS support\nAEM Importer Options\n--open\nOpen a browser window at specified path\nDefault: \"/tools/importer/helix-importer-ui/index.html\"\n--no-open, --noOpen\nDisable automatic opening of browser window\n--cache\nPath to local folder to cache the responses\n--ui-repo, --uiRepo\nGit repository for the AEM Importer UI. A fragment may indicate a branch other than main.\nDefault: \"https://github.com/adobe/helix-importer-ui\"\n--skip-ui, --skipUI\nDo not install the AEM Importer UI\nDefault: false\n--headers-file, --headersFile\nLocation of a custom .json file containing headers to be used with all proxy requests\nOptions\n--version\nShow version number\nDefault: boolean\n--log-file, --logFile\nLog file (use \"-\" for stdout)\nDefault: \"-\"\n--log-level, --logLevel\nLog level\nChoices: \"silly\", \"debug\", \"verbose\", \"info\", \"warn\", \"error\"\nDefault: \"info\"\n--dump-headers, --dumpHeaders\nDump request headers to console for debugging\nDefault: false\n--allow-insecure, --allowInsecure\nWhether to allow insecure requests to the server\nDefault: true\n--help\nShow help\nDefault: boolean","lastModified":"1763069066","labs":""},{"path":"/docs/unsupported","title":"Unsupported Integrations","image":"/docs/media_1c28758b79582d19b79d7257cc9a5a8a0dedc9182.png?width=1200&format=pjpg&optimize=medium","description":"Don't try these unsupported integrations at home","content":"style\ncontent\n\nUnsupported Integrations\n\nAs a composable architecture, AEM 
embraces integrations with customers’ preferred infrastructure, be it Content Delivery Networks, Content Authoring and Content Repositories, or Web Optimization and Analytics software.\n\nThere are a number of integration patterns that have proven to be problematic for security, availability and performance reasons. These patterns are generally discouraged by Adobe. Refer to this document for an outline of integration patterns that frequently cause issues for AEM customers.\n\nUnsupported Content Delivery Networks\n\nAdobe Experience Manager supports a wide set of Content Delivery Networks (CDNs) and offers deep integrations, including optimized time-to-live (TTL) and surgical invalidation upon content or code update.\n\nFor CDNs not included in this list, the following common problems can be observed:\n\nCaching relies on fixed TTLs, slowing down the rollout of content updates and code changes\nCaching is often misconfigured or disabled, increasing time-to-first-byte (TTFB) and decreasing web performance\nOrigin requests sometimes use insecure Transport Layer Security (TLS) practices such as domain fronting, which impedes availability and security of the site\n\nTo rectify this, we recommend that customers switch to a supported CDN. Every Adobe Experience Manager license includes access to an Adobe-managed, supported CDN and we provide a guide for picking the right supported CDN.\n\nDiscouraged Security Practices\n\nAdobe Experience Manager supports various security configurations and integrations with Web Application Firewalls (WAFs) and security tools. However, certain security practices have proven to be problematic for performance and reliability.\n\nTLS Interception\n\nTLS interception or SSL inspection, while intended to enhance security, often creates the following issues:\n\nTLS interception introduces multiple challenges that can severely impact your site's performance and security posture. 
The additional processing required for intercepting and re-encrypting traffic creates noticeable latency, while improper certificate handling can introduce new security vulnerabilities rather than preventing them.\n\nFurthermore, these interception practices often conflict with modern security protocols, breaking the fundamental promise of end-to-end encryption that many applications rely on. When TLS connections are intercepted, the original security guarantees between the client and server are compromised, potentially exposing sensitive data to unnecessary risks.\n\nFinally, an incomplete rollout of custom Certificate Authority (CA) certificates to developers can cause certificate rejection issues in the AEM CLI.\n\nWe recommend implementing end-to-end encryption without intermediate TLS termination points, utilizing modern security features built into supported CDNs.\n\nWeb Application Firewalls\n\nWhile WAFs are essential for security, certain implementations can negatively impact site performance.\n\nWeb Application Firewalls can introduce performance challenges through synchronous request processing, complex pattern matching rules, inefficient geographic routing, and interference with CDN caching. 
These factors combine to create unnecessary latency and diminish the performance benefits of your content delivery architecture.\n\nFor optimal security and performance, consider using Adobe's built-in WAF security features or implementing WAF solutions through supported CDN providers.\n\nPrevious\n\nChina","lastModified":"1734632496","labs":""},{"path":"/docs/byo-cdn-adobe-managed","title":"Adobe Managed CDN","image":"/docs/media_1581b6aee2f43df1d92f83787350d56a1adea76ea.png?width=1200&format=pjpg&optimize=medium","description":"The following steps illustrate how to use the Adobe Managed CDN (part of Edge Delivery Services entitlement) to configure a property to deliver content from ...","content":"style\ncontent\n\nAdobe Managed CDN\n\nThe following steps illustrate how to use the Adobe Managed CDN (part of Edge Delivery Services entitlement) to configure a property to deliver content from a site powered by Edge Delivery Services in Adobe Experience Manager Sites as a Cloud Service.\n\nPrerequisites and limitations\nYou must have a license with an Edge Delivery Services entitlement.\nYou must own the domain under which you want to serve your site.\nYou need to be able to make Domain Name System (DNS) changes to the domain name.\nYou have the choice to bring your own certificate or ask Adobe to provide one provisioned by Let’s Encrypt.\nHTTPS is required and IPv6 is not supported.\nBefore you go live\n\nThere are two deployment options for going live with Adobe Managed CDN:\n\nSet up an HTTP proxy from an existing AEM Sites as a Cloud Service environment. This is typically used when you already have an existing environment and you want to migrate part of a site to Edge Delivery Services. You can also add a new environment.\nSet up a new Edge Delivery site independently of an AEM Sites as a Cloud Service environment. This is the approach used when you do not have an AEM author or publish environment and you want to use Edge Delivery Services on its own. 
See here for benefits.\n\nA checklist of the steps you need to complete for both options:\n\nInstall or request a certificate in Cloud Manager (you need to do that for both www and non-www apex domains)\nInstall the domains in Cloud Manager (you need to do that for both www and apex domains)\nAdd a CDN configuration to map your domain to your Edge Delivery site. (This will be done differently depending on which deployment option you choose)\nSet up push invalidation in the project's configuration\nPoint the DNS CNAME of your www site to cdn.adobeaemcloud.com and the A records of your apex to the IP addresses listed below.\n(Option 1) Set up a proxy from an existing environment\n\nThis requires an existing AEM Sites as a Cloud Service environment that needs to be configured via configuration pipeline to proxy some (or all) paths on your domain to your Edge Delivery Site. (see how to define a proxy using originSelectors and how to run a Configuration Pipeline).\n\nA redirect from https://example.com to https://www.example.com must be defined using a redirect rule, but all other redirects may use the redirects spreadsheet\n\n(Option 2) Set up an Edge Delivery site without an existing environment\n\nIn case you do not have an existing authoring/publish environment, follow the steps for setting up a new Edge Delivery site in Cloud Manager.\n\nAn automatic redirect will be established from https://example.com to https://www.example.com but all other redirects must happen through the redirects spreadsheet\n\nSet up push invalidation\n\nPush invalidation automatically purges content on the managed CDN whenever an author publishes content changes.\n\nContent is purged by URL and by cache tag/key.\n\nPush invalidation is enabled by adding specific properties to the project's configuration. 
Depending on how you author your content, this is done in two different ways:\n\nIf you author your content with SharePoint or Google Docs, create an Excel workbook named .helix/config.xlsx in SharePoint or a Google Sheet named .helix/config in Google Drive.\nIf you author your content with AEM Authoring and the Universal Editor, create a spreadsheet called configuration in the Sites console.\nIf your site is using the configuration service, see here for how to update the CDN configuration.\n\nIn the spreadsheet, define the following configuration properties:\n\nkey\t value\t comment \n cdn.prod.host\t <Production Host>\t Host name of production site, e.g. www.example.com \n cdn.prod.type\t managed\t \n cdn.prod.envId\t <environment ID>\t optional (format: pXXXX-eXXXX)\n\nAfter making changes to the configuration sheet, preview it with the Sidekick or publish it in the Sites console to activate the changes.\n\n(Optional) Lower the TTL of existing DNS records before going live to enable a faster rollout\n\nIf the DNS records for your go-live domain have a long time-to-live (TTL)—such as 12 hours or more—you may want to temporarily lower the TTL (for example, to 1 hour or even 60 seconds) to speed up site rollout. Doing so ensures traffic is routed to the new site more quickly after launch. Be sure to restore the original TTL value when updating the DNS records for go-live.\n\nAs you go live\n\nMake or request the following DNS changes:\n\nFor www.example.com set the CNAME DNS record to cdn.adobeaemcloud.com (see details).\nUse a high time-to-live (TTL) value of 3600 seconds (one hour) or more to improve site reliability during a DNS outage. 
If you previously lowered the TTL for a faster go-live rollout, make sure to restore it to its original longer value.\n\nIf you want to establish an automatic redirect from https://example.com to https://www.example.com, the following additional steps are required\n\nFor example.com set the A record to these four IP addresses: 151.101.3.10, 151.101.67.10, 151.101.131.10, and 151.101.195.10 (see details)\nIf your DNS provider cannot support multiple IP addresses in an A record (for example, GoDaddy), use any of the listed IP addresses. This will be a bit less reliable.\nUse the same high TTL as described above\n\nDepending on your DNS setup—including the TTL specified in your DNS records—changes can take anywhere from a few minutes to several hours to propagate. If your site already receives traffic, ensure all redirects are configured or keep the existing site live until the TTL expires and all traffic is routed to the new site.\n\nIf you are hosting both the apex and www versions of the domain, make sure DNS is properly configured for both to enable certificate generation.\n\nFor a timed launch, consider setting up a temporary holding homepage that you can quickly replace with the final content at the launch time. 
Because publishing in AEM is fast and reliable, this approach gives you better control over when your content becomes publicly visible than relying solely on DNS changes.\n\nAfter you go live\n\nThe AEM team reviews your setup and notifies you if any issues are found.\n\nPrevious\n\nBYO CDN Setup Overview","lastModified":"1753888263","labs":""},{"path":"/developer/admin-errors","title":"Backend Errors","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Backend error codes and templates in Adobe Experience Manager","content":"style\ncontent\n\nBackend Errors\n\nIn case of an error in a backend request, the Admin Service returns an appropriate 4xx or 5xx HTTP status code as well as the following HTTP headers in its response:\n\nx-error: the English error message\nx-error-code: the error code\n\nEach error code maps to a template, which a client can use to process the English error message, for example to translate it into the user’s preferred language or adapt the error message to the context it occurred in.\n\nError Codes and Templates\nCode\t Template\t Likely Root Cause \n AEM_BACKEND_FETCH_FAILED\t Unable to fetch '$1' from '$2': $3\t Backend fetch failed (network, permissions, server error). \n AEM_BACKEND_NOT_FOUND\t Unable to preview '$1': File not found\t Source document is missing or inaccessible. \n AEM_BACKEND_TYPE_UNSUPPORTED\t Unable to preview '$1': File type not supported: $2\t File type not supported by the preview system. \n AEM_BACKEND_NO_HANDLER\t Unable to preview '$1': No handler found for document: $2\t No processing handler available for this file type. \n AEM_BACKEND_NON_MATCHING_MEDIA\t Unable to preview '$1': content is not a '$2' but: $3\t Actual media type does not match expected type. \n AEM_BACKEND_VALIDATION_FAILED\t Unable to preview '$1': validation failed: $2\t Document failed backend validation checks. 
\n AEM_BACKEND_DOC_IMAGE_TOO_BIG\t Unable to preview '$1': source contains large image: $2\t Embedded image inside a doc exceeds size limits. \n AEM_BACKEND_UNSUPPORTED_MEDIA\t Unable to preview '$1': '$2' backend does not support file type.\t Specific backend does not support this media type. \n AEM_BACKEND_NO_CONTENT_TYPE\t Unable to preview '$1': Content type header is missing\t Source response missing Content-Type header. \n AEM_BACKEND_JSON_INVALID\t Unable to preview '$1': JSON fetched from markup is invalid: $2\t Markup contains malformed JSON. \n AEM_BACKEND_FILE_EMPTY\t Unable to preview '$1': File is empty, no markdown version available\t Source file is empty. \n AEM_BACKEND_FILE_TOO_BIG\t Unable to preview '$1': Documents larger than 100mb not supported: $2\t File exceeds 100 MB size limit. \n AEM_BACKEND_MP4_PARSING_FAILED\t Unable to preview '$1': Unable to parse MP4\t MP4 file corrupted or unparseable. \n AEM_BACKEND_MP4_TOO_LONG\t Unable to preview '$1': MP4 is longer than 2 minutes: $2\t MP4 exceeds duration limit. \n AEM_BACKEND_MP4_BIT_RATE_TOO_HIGH\t Unable to preview '$1': MP4 has a higher bitrate than 300 KB/s: $2\t MP4 exceeds bitrate threshold. \n AEM_BACKEND_ICO_TOO_BIG\t Unable to preview '$1': ICO is larger than 16KB: $2\t ICO exceeds maximum size. \n AEM_BACKEND_PDF_TOO_BIG\t Unable to preview '$1': PDF is larger than 10MB: $2\t PDF exceeds size limit. \n AEM_BACKEND_SVG_SCRIPTING_DETECTED\t Unable to preview '$1': Script or event handler detected in SVG at: $2\t Disallowed script/event attributes in SVG. \n AEM_BACKEND_SVG_ROOT_ITEM_MISSING\t Unable to preview '$1': Expected XML content with an SVG root item\t SVG missing <svg> root element. \n AEM_BACKEND_SVG_PARSING_FAILED\t Unable to preview '$1': Unable to parse SVG XML\t SVG XML is invalid or corrupted. \n AEM_BACKEND_SVG_TOO_BIG\t Unable to preview '$1': SVG is larger than 20KB: $2\t SVG file exceeds size limit. 
\n AEM_BACKEND_IMAGE_TOO_BIG\t Unable to preview '$1': Image is larger than $2: $3\t Image exceeds supported dimensions/size. \n AEM_BACKEND_CONFIG_EXISTS\t Config already exists\t Attempted to create a config that already exists. \n AEM_BACKEND_CONFIG_TYPE_MISSING\t No '$1' config in body or bad content type\t Config type missing from request or wrong content type. \n AEM_BACKEND_CONFIG_TYPE_INVALID\t Bad '$1' config: $2\t Invalid or malformed config type/value. \n AEM_BACKEND_CONFIG_MISSING\t Config not found\t Requested config does not exist. \n AEM_BACKEND_CONFIG_READ\t Error reading config: $1\t Config read operation failed. \n AEM_BACKEND_CONFIG_CREATE\t Error creating config: $1\t Config creation operation failed. \n AEM_BACKEND_CONFIG_UPDATE\t Error updating config: $1\t Config update operation failed. \n AEM_BACKEND_CONFIG_DELETE\t Error removing config: $1\t Config deletion operation failed.","lastModified":"1757056422","labs":""},{"path":"/docs/authoring-guide","title":"Where to author your site","image":"/docs/media_1b895c2a9e3ec92a97520c7ec15b142c357262663.png?width=1200&format=pjpg&optimize=medium","description":"Edge Delivery Services has built-in support for a variety of different content sources. We've listed the most popular ones for you below. Pick the one ...","content":"Where to author your site\n\nEdge Delivery Services has built-in support for a variety of different content sources. We've listed the most popular ones for you below. Pick the one best suited to your needs:\n\nMicrosoft SharePoint\n\nSharePoint is a widely used content repository for Word and Excel documents. 
Users are familiar with these tools, requiring no extra training.\n\nSet up your SharePoint\n\nGoogle Drive\n\nGoogle Docs and Sheets are a popular choice for content repositories.\n\nGet started with Google Drive\n\nUniversal Editor in AEM Sites\n\nManage your content in AEM Sites and use the Universal Editor for in-context authoring.\n\nGet early access to Universal Editor\n\nDocument Authoring\n\nDocument Authoring for Edge Delivery Services offers a user-friendly, performant and highly available document-based authoring experience.\n\nStart with Document Authoring in early access\n\nNot sure yet?\n\nIf you are unsure which option is best for you, answer the questions below and follow the recommendation.\n\nRubric\nhttps://main--helix-website--adobe.aem.page/tools/decisions.json?sheet=authoring\nMicrosoft Word & Excel\n\nSharePoint is a widely used content repository for Word and Excel documents. Users are familiar with these tools, requiring no extra training. Despite some rate limiting, SharePoint remains a reliable option for content management.\n\nhttps://www.aem.live/docs/authoring\n\nGoogle Docs & Sheets\n\nGoogle Docs and Sheets are a popular choice for content repositories. Users need no additional training, and while not highly structured for automation, they are user-friendly and widely adopted.\n\nhttps://www.aem.live/docs/authoring\n\nAEM Content Fragment Editor\n\nAEM Content Fragment Editor is designed for the needs of headless applications, generating JSON for Edge Delivery Services, among other things. 
It requires authors to follow the content structure enforced by application developers and may therefore require additional training.\n\nhttps://experienceleague.adobe.com/en/docs/experience-manager-learn/sites/content-fragments/content-fragments-feature-video-use\n\nAEM Universal Editor\n\nThe Universal Editor in Adobe Experience Manager Sites offers live-preview of the content to be authored, combined with a structure-first approach to authoring, guided by tight field definitions. It requires initial training but provides great flexibility by mapping content to front-end blocks. Note that this is not the usual AEM Page Editor experience that some might be used to in AEM.\n\nhttps://experienceleague.adobe.com/en/docs/experience-manager-cloud-service/content/implementing/developing/universal-editor/introduction\n\nDocument Authoring\n\nDocument Authoring for Edge Delivery Services offers a user-friendly, high-performance, and highly available document-based authoring experience. It requires minimal training, though automation and workflows may need some customization.\n\nhttps://da.live/docs\n\nstyle\ncontent","lastModified":"1756307814","labs":""},{"path":"/developer/sidekick-v7-migration","title":"Migrating Your Sidekick v6 Customizations","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"The goal of this document is to explain how developers can migrate existing custom code to work with the new sidekick (v7). This information is ...","content":"style\ncontent\nMigrating Your Sidekick v6 Customizations\n\nThe goal of this document is to explain how developers can migrate existing custom code to work with the new sidekick (v7). 
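For instance, the element rename can be handled by listening on whichever sidekick element is present. The sketch below uses the element names, renamed events, and `event.detail` change described in this guide; the helper names `toV7EventName` and `listenToSidekick` are hypothetical, not part of any official API:

```javascript
// Sketch: support both sidekick versions until all users have upgraded.
// Element names (helix-sidekick / aem-sidekick), the renamed events, and the
// event.detail change are from this guide; the helper names are hypothetical.
const V7_EVENT_NAMES = {
  'helix-sidekick-ready': 'sidekick-ready',
  envswitched: 'env-switched',
  pluginused: 'plugin-used',
  loggedin: 'logged-in',
  loggedout: 'logged-out',
  statusfetched: 'status-fetched',
};

// Map a v6 event name to its v7 equivalent; unchanged names pass through.
function toV7EventName(v6Name) {
  return V7_EVENT_NAMES[v6Name] || v6Name;
}

// Attach the same handler to whichever sidekick element is present.
function listenToSidekick(v6Name, handler) {
  const v6 = document.querySelector('helix-sidekick'); // v6 element
  const v7 = document.querySelector('aem-sidekick');   // v7 element
  // v6 stores the payload in event.detail.data, v7 directly in event.detail
  if (v6) v6.addEventListener(v6Name, (e) => handler(e.detail.data));
  if (v7) v7.addEventListener(toV7EventName(v6Name), (e) => handler(e.detail));
}
```

Registering via the v6 name keeps existing call sites unchanged while both sidekick versions are in circulation.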
This information is complementary to what is detailed in Developing for the Sidekick.\n\nDetecting the Sidekick Element\n\nThe new sidekick uses a different custom element: look for references to helix-sidekick in your code and change them to aem-sidekick, or, to continue supporting both versions until all of your users have switched to the new one, apply the same changes for both.\n\nFor a code example of how to detect the presence of the sidekick element, see Listening for Events.\n\nExisting Event Listeners\n\nSome event names have changed in the new sidekick for consistency and legibility. Look for references to event names in your code and adjust them where applicable.\n\nHere’s a complete list of event names that have changed:\n\nEvent Name in v6\t Event Name in v7 \n helix-sidekick-ready\t sidekick-ready \n envswitched\t env-switched \n pluginused\t plugin-used \n loggedin\t logged-in \n loggedout\t logged-out \n statusfetched\t status-fetched\n\nFor a detailed description of all events, their payloads, and how to listen to them, see Events.\n\nEvent Payloads\n\nAn event’s payload is now found directly in its event.detail property. Previously, it was stored in event.detail.data.\n\nExisting Customizations\n\nThe config schema remains the same for both versions, so your existing plugins and special views will be compatible with the new sidekick.\n\nSee Customizing the Sidekick for a detailed description of all possible customization options.","lastModified":"1747396349","labs":""},{"path":"/docs/limits","title":"Limits","image":"/docs/media_12128144bfee4f64ba2f701646dc5edc48f309a9a.png?width=1200&format=pjpg&optimize=medium","description":"Size and Rate Limits","content":"style\ncontent\n\nLimits\n\nTo ensure functioning within normal parameters and system stability for all customers, we enforce limits across many dimensions. 
The limits are designed to prevent accidental misuse; they are subject to change and can often be raised individually on a per-customer basis.\n\nIf you think the limits are prohibitive for your use case, contact us and let us know.\n\nDelivery Limits\nDocument Naming Restrictions\n\nThe URLs generated for document names will only take into account the following characters:\n\nalphabetical, lowercase characters (a-z)\nnumbers (0-9)\nsimple dash (-)\n\nAll other characters are replaced with a single dash, and the final document name will have leading and trailing dashes removed.\n\nThe full file path for a document should not exceed 900 characters.\n\nSupported File Types\n\nThe following file types can be delivered from the Content Bus and Media Bus:\n\nHTML (extension-less only), JSON, MP4, PDF, SVG, JPG, PNG and AVIF.\n\nOther file types are deliverable via Code Bus, or may need to be delivered through 3rd party systems.\n\nResponse Payloads\n\nThe total response payload may not exceed 6MB (compressed). This is especially relevant for large JSON resources like metadata sheets or query indices, which can grow automatically with published content. Be sure to use the limit and/or offset query parameters when requesting potentially large JSON resources. See Spreadsheets and JSON for more information.\n\nRate Limits (number of requests from single IP address per time)\n\nTo protect against abuse and to ensure system stability and fair resource usage, the number of requests made from a single IP address to a specific aem.live or aem.page host name is limited to 200 requests per second. Requests exceeding this limit will receive a 429 status code response.\n\nContent Source Limits\n\nThe following limits apply to preview operations for content that is ingested from SharePoint or Google Drive. 
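The limit/offset paging for large JSON resources mentioned under Response Payloads can be sketched as follows. This assumes Node 18+ (or a browser) for the global fetch; the function name and the query-index.json path are illustrative, while the {total, offset, limit, data} response shape is the one the service returns for sheet-backed JSON:

```javascript
// Sketch: page through a potentially large JSON resource with limit/offset
// instead of fetching it in one oversized response.
async function fetchAllRows(url, pageSize = 500) {
  const rows = [];
  let offset = 0;
  let total = Infinity;
  while (offset < total) {
    const resp = await fetch(`${url}?limit=${pageSize}&offset=${offset}`);
    const json = await resp.json();
    total = json.total;                // total row count reported by the service
    rows.push(...json.data);           // this page's rows
    if (json.data.length === 0) break; // safety: stop if a page comes back empty
    offset += pageSize;
  }
  return rows;
}

// e.g. const rows = await fetchAllRows('https://main--mysite--owner.aem.page/query-index.json');
```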
If exceeded, most of them produce an error in the sidekick for authors.\n\nFile Size Limits\n\nSupport for uploading files from the content sources is provided for the following file types, with the following limits.\n\nFile type\t Limit \n Document (.docx / gdoc)\t 100 MB \n Spreadsheets\t 100k rows, 500k cells.\nCharacter limit per cell: 32K (Microsoft Excel) or 50K (Google Sheets) \n Videos (.mp4)\t 2 minutes, 300 KB/s \n PDF\t 20 MB \n SVG\t 40 KB \n Images (.png, .jpg, .avif)\t 20 MB \n Fav Icons (.ico)\t 16 KB\n\nOther file types may be uploaded through code (e.g. through GitHub) or, depending on the use case, created on the client side (e.g. .ics files).\n\nIndexing Limits\n\nAn individual index cannot grow beyond 50k pages indexed.\n\nSitemap Limits\n\nAn individual sitemap file cannot grow beyond 50k pages or 50 MB.\n\nRedirect Limits\n\nThe number of redirects per site cannot exceed 100k.\n\nNumber of pages per site\n\nAn individual site should not grow beyond 1 million pages. It is advisable to split up a large site into smaller ones matching business units or markets. A repoless setup allows you to reuse the same code base for multiple sites.\n\nRate Limits (number of preview operations per time)\n\nRate limits are inferred from the underlying content source (e.g. SharePoint and Google Drive). Throttled requests will receive a response with a 503 status code and an x-error HTTP header containing (429) followed by the message from the backend.\n\nNote that, because of a limit inferred from Cloudflare’s R2 storage, the number of preview operations on the same resource is limited to 1 per second.\n\nAEM Code Sync/GitHub Limits\n\nTo keep the project size manageable, we limit the number and size of files that we synchronize from your GitHub repository, as well as the number of active branches/refs. 
If you have more files in your repo, use .hlxignore to avoid syncing them.\n\nDimension\t Limit \n Number of files per ref\t 500 \n Number of active refs\t 100 \n Total size of ref\t 10 MB \n ref--repo--owner\t 63 characters \n Default branch\t main\nGitHub Naming Restrictions\n\nGitHub names, including owner, repository, and ref, may only contain the following characters:\n\nalphabetical (a-z)\nnumerical (0-9)\nsimple dash (-)\nRate Limits (number of deploy operations per time)\n\nRate limits are inferred from GitHub.\n\nBYOM Content Source Limits\n\nThe following limits apply to preview operations for content that is ingested via a bring-your-own content source. If exceeded, most of them produce an error in the sidekick for authors.\n\nType\t Limit \n Source size (.html)\t 1 MB \n Response Time\t 10 seconds* \n Number of unique images\t 200 \n Image fetch response time\t 5 seconds* \n Size of images\t 20 MB\n\n* Please note that the overall response time for fetching the HTML and the images must not exceed 25 seconds.\n\nThe admin service observes the following per-site limits when sending requests to the content source:\n\nType\t Limit \n Maximum concurrency\t 100 \n Maximum requests per minute\t 600\nAdmin API Limits\n\nThe admin API is rate limited to 10 requests per second per project for all operations. Throttled requests will receive a response with a 429 status code.\n\nUse the respective bulk APIs for operating on large numbers of resources.\n\nNote that using async jobs via the bulk API is limited to 500 pending jobs per topic.","lastModified":"1773041874","labs":""},{"path":"/docs/lifecycle","title":"Feature Lifecycle","image":"/docs/media_1f621bffc514132d811a1e4864f783d8028af40ec.png?width=1200&format=pjpg&optimize=medium","description":"Simplicity and productivity guide everything we do. 
To stay lean and fast, we’ve redefined our approach—focusing only on features that customers actively use.","content":"style\ncontent\n\nFeature Lifecycle\n\nSimplicity and productivity guide everything we do. To stay lean and fast, we’ve redefined our approach—focusing only on features that customers actively use.\n\nWhat does this look like?\n\nEdge Delivery Services follows a feature lifecycle, not a traditional roadmap. When our existing features don’t meet a business need, we collaborate with customers to develop a solution. If it gains traction across multiple customers, we productize it and monitor usage. When a feature is no longer actively used, we replace it with a better solution.\n\nFeature Lifecycle Stages:\n\nMap of Interest – A collection of business needs that cannot yet be met with our existing features.\nAdobe-led Implementation – Solutions developed in collaboration with customers to address a Map of Interest need.\nProduct Feature – A fully productized feature, proven in real-world usage, and available as part of the product offering.\nDeprecation – Features no longer recommended to be used because they lack justified usage or general usefulness.\nRemoval – Deprecated features are removed in favor of better alternatives.\n\nThis approach ensures Edge Delivery Services remains agile, customer-driven, and optimized for real-world needs.\n\nhttps://www.aem.live/docs/featurelifecycle.svg\n\nMap of Interest\n\nThrough active collaboration and discussion with customers, our Map of Interest is populated. 
It represents a list of needs that are under consideration for solution development in partnership with a customer team.\n\n“Bring your own Git” support for Bitbucket, Gitlab, GitHub Enterprise, and Azure DevOps\nReviews and snapshots\nFine-grained content permissions\nStreamlined product catalog integration\nIntuitive multi-origin endpoint configuration\nAEM Authoring with Universal Editor\n\nWhen using AEM Authoring with Universal Editor as your content source for your Edge Delivery Services project, most Sites features are available. For example, nearly any action available in the Sites console is applicable to Edge Delivery Services.\n\nHowever, some features of the Sites console are either not yet or only partially available for Edge Delivery Services. For this reason, such features may be presented differently than their Sites counterparts or there may be alternative solutions for the use case. If your project requires one of the following features, please review the alternatives suggested below and reach out to Adobe to work together to understand your use case.\n\nSites feature\t Status on edge\t Notes \n MSM, Language Copy, and Launches\t Available (documentation)\t Inheritance can be reverted at the page level or at the component level with a Universal Editor Extension \n Page templates\t Partially available (documentation)\t Pages created from templates are independent copies of the original template. \n Context Hub and targeting\t Not available\t \n Timewarp\t Not available\t \n Associated content\t Not available\t \n Experience Fragments\t Alternative\t Create a page and use a fragment component\n\nRelease history\n\nWe roll out new releases for AEM components and services on a continuous basis to deploy enhancements and address issues. See the Recent Releases for a detailed change list by component. If you would like to get more information on a particular release, please reach out.\n\nEarly-access technology\n\nRecently added features that are now available for customer use. 
These are features that still may undergo significant changes, require higher-than-usual levels of support, or are not widely adopted yet.\n\nExperimentation\n\nExperimentation is the practice of making your site more effective by changing content or functionality, comparing the results with the prior version, and picking the improvements that have measurable effects.\n\nAdobe Experience Manager Assets Sidekick Plugin\n\nWith the Experience Manager Assets Sidekick plugin, you can use assets from your Experience Manager Assets repository while authoring documents in Microsoft Word or Google Docs.\n\nConfiguring Adobe Target Integration\n\nThis article will walk you through the steps of setting up an integration with Adobe Target so you can personalize your pages via the Adobe Target Visual Experience Composer (VEC).\n\nProduct Bus / Product Pipeline\n\nSimple way to manage / update product information with a https://schema.org/Product inspired JSON structure. Produces Feeds, Sitemaps, Indexes as well as markup for Product Detail Pages (PDP).\n\nConfiguring Adobe Experience Cloud Integration\n\nThis article will walk you through the steps of setting up an integration with the Adobe Marketing Technology stack. 
The stack combines Adobe Experience Platform WebSDK, Adobe Analytics, Adobe Target or Adobe Journey Optimizer, Adobe Client Data Layer and Adobe Experience Platform Tags.\n\nWeb Components\n\nWeb Components are a collection of web standards that allow the creation and use of reusable, modular functionality in web sites and web apps.\n\nSnapshots and Reviews\n\nA new feature to support the concept of publishing a set of content (dozens or hundreds of pages) usually for a launch of an initiative or event.\n\njson2html\n\nAn OOTB overlay to create dynamic server-side rendered Edge Delivery friendly HTML pages out of JSON data from any endpoint.\n\nPublishing AEM Content Fragments\n\nAEM content fragments can be published to Edge Delivery Services as semantic HTML, improving SEO and discoverability and enabling fast content delivery.\n\nDeprecations and Removals\n\nFeatures that have been deprecated or removed from the product based on lack of justified usage or significant misuse of a feature that leads to undesirable results. See Deprecations and Removals for features slated for removal.\n\nQuestions or Ideas?\n\nPlease contact the community via Discord.","lastModified":"1770360996","labs":""},{"path":"/docs/sidekick-errors","title":"AEM Sidekick Errors","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Error messages in AEM Sidekick","content":"style\ncontent\n\nAEM Sidekick Errors\n\nIn case of a problem, AEM Sidekick will display an appropriate error message. The following table contains all possible error messages and their likely root causes.\n\nFor more information on backend error codes and error message templates from the Admin API, see Admin Errors.\n\nMessage\t Likely Root Cause \n File not found. Source document either missing or not shared with AEM.\t The source file is missing or not properly shared with AEM. \n Unable to fetch $1 from $2.\t The backend was unable to retrieve the requested resource. 
\n Unable to preview $1: empty file\t The file intended for preview is empty. \n Unable to preview $1: file larger than 100MB\t The file exceeds the maximum allowed size for preview. \n Unable to preview $1: ICO is larger than 16KB\t The ICO file size exceeds the allowed limit. \n Unable to preview $2: invalid JSON\t The JSON data is malformed or not parsable. \n Unable to preview $1: MP4 bit rate is higher than 300KB/s\t The video’s bitrate is too high, exceeding the processing limits. \n Unable to preview $1: MP4 is longer than 2 minutes\t The MP4 video exceeds the allowed duration for preview. \n Unable to preview $1: expected $2 but found $3\t There is a mismatch between the expected media type and the actual file type provided. \n Unable to preview $1: content type header is missing\t The server response is missing the Content-Type header, leading to processing issues. \n Unable to preview $1: no handler found\t No available handler exists to process the given file type. \n Unable to preview $1: SVG contains invalid XML\t The SVG file has invalid XML, making it unprocessable. \n Unable to preview $1: SVG root item missing\t The SVG file is malformed, lacking the required root element. \n Unable to preview $1: illegal scripting detected in SVG\t The SVG file contains embedded scripts that are disallowed for security reasons. \n Unable to preview $1: file type not supported\t The provided file type is not supported by the preview handler. \n Unable to preview $2: file type not supported\t The media file is of a type that cannot be previewed by the system. \n Unable to preview $1: invalid MP4\t The MP4 file is invalid or could not be parsed. \n Unable to preview $1: image is larger than 10MB\t The image exceeds the allowed size limit. \n Unable to preview $1: contained image larger than 10MB\t An embedded image inside the document exceeds the allowed size. \n Unable to preview $1: $2\t A validation error occurred while attempting to preview the file. 
\n Bulk operation failed. Please try again later.\t The bulk operation encountered a general failure, possibly due to server issues. \n Files can only contain normalized, small latin letters, digits and hyphens. This file contains illegal characters: $1\t The file name includes characters that are not permitted. \n Files can only contain normalized, small latin letters, digits and hyphens. The following files contain illegal characters: $1\t One or more file names include characters that are not permitted. \n Folders can only contain normalized, small latin letters, digits and hyphens. This folder contains illegal characters: $1\t The folder name includes characters that are not permitted. \n You need to sign in to generate the preview of more than 100 files\t The user is not signed in and has exceeded the preview limit. \n You need to sign in to publish more than 100 files\t The user is not signed in and has exceeded the publication limit. \n Caution: unexpected side effects!\t This warns the user that an action might have unforeseen consequences. \n Legacy sidekick not found. If the extension is installed but currently disabled, please enable it and try again.\t The legacy AEM Sidekick extension is either missing or disabled. \n This file no longer exists in the repository, deleting it cannot be undone! Are you sure you want to delete it?\t The file is missing from the repository, and deletion is irreversible. \n This page no longer has a source document, deleting it cannot be undone! Are you sure you want to delete it?\t The page is missing its source document, and deletion is irreversible. \n Are you sure you want to $1 this?\t This is a generic prompt to confirm a destructive action. \n Sorry, please enter the text exactly as displayed to confirm.\t The entered confirmation text did not match the expected text. \n Type $1 to confirm\t A prompt asking the user to type a specific confirmation string. \n Too many requests: Your project is being throttled by $1. 
Please wait and try again later.\t The system is rate limiting due to excessive requests. \n Deletion failed. Please try again later.\t The deletion process encountered a server or permission issue. \n Apologies, we seem to be having problems at the moment. Please try again later.\t A general, unexpected internal error occurred on the server. \n An error occurred: $1\t A catch-all error likely triggered by an unforeseen issue. \n Job not found.\t The specified job identifier does not exist or could not be located. \n Sign in aborted.\t The user or system canceled the sign-in process. \n Sign in timed out. Please try again later.\t The sign-in process took too long—possibly due to network or server overload. \n Sign out failed. Please try again later.\t An error occurred during sign-out, possibly due to session or network issues. \n Preview generation failed. Please try again later.\t The system was unable to generate a preview, possibly because of file or backend issues. \n Failed to activate configuration: $1\t There was a problem during configuration activation, possibly due to misconfiguration or missing parameters. \n This is a Microsoft Word document, please convert it to Google Docs first.\t The document format isn’t supported directly for preview; conversion is required. \n This is a Microsoft Excel document, please convert it to Google Sheets first.\t The document format isn’t supported directly for preview; conversion is required. \n Preview generation failed. Must be Google document or spreadsheet.\t The file type is not supported for preview; the system expects a Google Doc/Sheet. \n Publication failed. Please try again later.\t The publication process encountered an error, likely due to server or configuration issues. \n Publication failed, generate preview first.\t A preview is required before publication; an intermediate step is missing. \n Failed to fetch the page status. 
Please try again later.\t The system couldn’t retrieve the current status, possibly due to network or backend issues. \n Failed to fetch the page status. Make sure this document is being shared with AEM.\t The document may not have been shared properly with AEM, causing a lookup failure. \n Unpublication failed. Please try again later.\t An error occurred during the unpublication process, possibly due to server or permission issues.\n\nSee a complete list of all messages in GitHub.","lastModified":"1757057944","labs":""},{"path":"/docs/snapshots-reviews","title":"Snapshots and Reviews","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"To support the concept of publishing a set of content (dozens or hundreds of pages) usually for a launch of an initiative or event, the ...","content":"style\ncontent\n\nSnapshots and Reviews\n\nTo support the concept of publishing a set of content (dozens or hundreds of pages), usually for a launch of an initiative or event, the concept of content snapshots and reviews has been introduced.\n\nA snapshot is made up of a set of pages (and other resources) captured at a particular point in time: the moment the preview state of a page is added to the snapshot. The snapshot has a name that is often tied to the launch of those pages. To draw a comparison with the version control of code, a snapshot is similar to a branch for content.\n\nThe .aem.reviews environments show what the entire website will look like when the snapshot is eventually published, and allow for reviews of the overall website before a release of the content in question.\n\nHow to use Snapshots and Reviews\n\nFor developers, the API endpoint for /snapshot can be helpful. 
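As a sketch of how a developer might script against it: the /snapshot route is named above, but the admin.hlx.page host and the org/site/ref/snapshot path pattern below are assumptions borrowed from the general Admin API convention; verify the exact shape against the linked documentation before use.

```javascript
// Sketch only: build a snapshot Admin API URL. Host and path pattern are
// assumptions to be checked against the Admin API documentation.
function snapshotUrl(org, site, ref, snapshotId) {
  return `https://admin.hlx.page/snapshot/${org}/${site}/${ref}/${snapshotId}`;
}

// e.g. add the current preview state of a page to a snapshot (hypothetical call):
// await fetch(`${snapshotUrl('myorg', 'mysite', 'main', 'product-launch-2025')}/products/index`, { method: 'POST' });
```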
See documentation here.\n\nThe easiest way for users to get engaged is to add the following plugin to your sidekick configuration.\n\n \"plugins\": [\n    {\n      \"id\": \"review\",\n      \"title\": \"Review...\",\n      \"environments\": [\n        \"dev\",\n        \"preview\",\n        \"review\"\n      ],\n      \"url\": \"https://tools.aem.live/tools/snapshot-admin/popover.html\",\n      \"isPopover\": true,\n      \"popoverRect\": \"width:400px; height:400px\",\n      \"passConfig\": true,\n      \"passReferrer\": true\n    }\n ]\n\n\nLifecycle of a snapshot\n\nUsually a snapshot is created for a launch or an event that requires a set of changes to a website. The snapshot is given a name that identifies the event, e.g. product-launch-2025 or fall-collection-2025, and then all the pages are edited, previewed, and added to the snapshot.\n\nWhile the snapshot is being put together, the pages can be reviewed on the corresponding .aem.reviews environment. For example, product-launch-2025--main--<site>--<org>.aem.reviews or fall-collection-2025--main--<site>--<org>.aem.reviews.\n\nOnce the set of pages is complete, the snapshot can be locked and the review can be completed. Once the review is completed, the snapshot can be published.\n\nThere is a higher-level review request and approval (or rejection) that implies locking/unlocking the snapshot as well as publishing when approved.\n\nRequesting a review will lock the snapshot while the review is in progress.\nRejecting a review will unlock the snapshot from review.\nApproving a review will publish the snapshot and remove the URLs from the snapshot.\nRelationship of Pages and Snapshots\n\nWhen a page (or a resource in general) is added to a snapshot, a copy of the current state in the content bus is taken and added to the snapshot. The copy remains immutable, even if the underlying page changes, unless the page is explicitly updated in the snapshot. 
The content contains all the information that was in the underlying document, including metadata.\n\nAny page can be a part of multiple snapshots, either at the exact same state or at a different state. If there are snapshots that are going to be published sequentially for different launches or events, the snapshots can contain different states of the same page.\n\nSpecial Mention: Redirects and Bulk Metadata\n\nRedirects and Bulk metadata can be added to snapshots by adding their underlying (spreadsheet) documents to a snapshot. In some projects these files change frequently, and it might make sense to isolate the redirects or metadata changes to individual files, publish metadata and redirects separately, or provide more governance in preparation for a large-scale snapshot.\n\nPermissions\n\nTo view snapshots either directly or via .aem.reviews, a user needs to be able to access .aem.page; to edit a snapshot, the author role is needed. To publish a snapshot, the publish role is needed.\n\nTools\n\nFrom the review button in the sidekick described above, you can also get to the snapshot admin user interface at\nhttps://tools.aem.live/tools/snapshot-admin/index.html\n\nThe snapshot admin allows you to add and remove pages in bulk, for example from a spreadsheet, as well as perform all of the snapshot-related operations (lock, unlock, request, reject, and approve).","lastModified":"1767867987","labs":"AEM Sites"},{"path":"/developer/ue-tutorial","title":"Setup AEM Sites as a Content Source","image":"/developer/media_1d00989ba18e942fbddc9bb108add01e153029f22.png?width=1200&format=pjpg&optimize=medium","description":"This tutorial will get you up-and-running with a new Adobe Experience Manager (AEM) project, authored in Universal Editor and publishing to Edge Delivery.","content":"style\ncontent\n\nSetup AEM Sites as a Content Source\n\nThis tutorial will get you up-and-running with a new Adobe Experience Manager (AEM) project, authored in Universal Editor and 
publishing to Edge Delivery.\n\nYou have two ways to get started with the AEM Sites Developer Tutorial. If you want to use your own AEM as a Cloud Service environment (for AEM 6.5+, reach out to us), follow the steps outlined below to get up and running.\n\nAlternatively, for an even faster route, you can jump right in using our pre‑built tutorial environment, fully configured and ready to go. Fill out the form and get started in seconds.\n\nIn about thirty minutes, you will have created your own site and be able to create, preview, and publish your own content and styling, and add new blocks.\n\nPrerequisites:\n\nYou have a GitHub account, and understand Git basics.\nYou have access to an AEM as a Cloud Service environment.\nYou understand the basics of HTML, CSS, and JavaScript.\nYou have Node/npm installed for local development.\n\nThis tutorial uses macOS, Chrome, and Visual Studio Code as the development environment, and the screenshots and instructions reflect that setup. You can use a different operating system, browser, and code editor, but the UI you see and steps you must take may vary accordingly.\n\nIf you are looking for a headless solution using the Universal Editor, check out the SecurBank sample app.\n\nIf you are looking for an Adobe Commerce Storefront and you don’t have access to AEM as a Cloud Service, use the Adobe Commerce Site Creator instead.\n\nUse the project boilerplate to create your code repository\n\nThe fastest and easiest way to get started following AEM best practices is to create your repository using the Boilerplate GitHub repository as a template.\n\nThis tutorial uses the standard AEM project boilerplate, which is the best solution for many projects. However, you can also use the Commerce project boilerplate if your AEM Authoring with Edge Delivery Services project needs to integrate with Adobe Commerce. 
The steps remain the same.\n\nNavigate to the GitHub page of the boilerplate appropriate for your project.\nFor most projects: https://github.com/adobe-rnd/aem-boilerplate-xwalk\nFor projects that integrate with Adobe Commerce: https://github.com/adobe-rnd/aem-boilerplate-xcom\nClick on Use this template and select Create a new repository.\nYou will need to be signed in to GitHub to see this option.\n\nBy default, the repository will be assigned to you. Adobe recommends leaving the repository set to Public. Provide a repository name and description and click Create repository.\n\nConnect your code to your content\n\nNow that you have your GitHub project, you need to link the repository to your AEM authoring instance.\n\nIn your new GitHub project, click the fstab.yaml file to open it and then the Edit this file icon to edit it.\nEdit the fstab.yaml file to update the mount point of your project. Replace the default Google Docs URL with the URL of your AEM as a Cloud Service authoring instance and then click Commit changes….\n\nhttps://<aem-author>/bin/franklin.delivery/<owner>/<repository>/main\nChanging the mount point tells Edge Delivery Services where to find the content of the site.\nEdge Delivery Services will always reference the fstab.yaml from your main branch.\nAdd a commit message as desired and then click Commit changes, committing them directly to the main branch.\n\nReturn to the root of your repository and click on paths.json and then the Edit this file icon.\n\nThe default mapping will use the name of the repository. Update the default mappings as required for your project, for example /content/<site-name>/:/ and similar, and click Commit changes….\nProvide your own <site-name>. 
You will need it in a later step.\nThe mappings tell Edge Delivery Services how to map the content in your AEM repository to the site URL.\n\nAdd a commit message as desired and then click Commit changes, committing them directly to the main branch.\n\nConnect AEM Code Sync bot\n\nThe AEM Code Sync bot listens for changes to your code and updates it in the code bus for high availability. You must enable the bot on your new repository.\n\nIn a new tab in the same browser, navigate to https://github.com/apps/aem-code-sync and click Configure.\n\nClick Configure for the org where you created your new repository in the previous step.\n\nOn the AEM Code Sync GitHub page under Repository access, select Only select repositories, select the repository that you created in the previous step, and then click Save.\n\nOnce AEM Code Sync is installed, you receive a confirmation screen. Return to the browser tab of your new repository.\n\nYou now have your own GitHub repository for developing your own Edge Delivery Services project, based on Adobe’s best-practices boilerplate.\n\nCreate and publish your site\n\nWith your GitHub project set up and linked to your AEM instance, you are ready to create and publish a new AEM site using Edge Delivery Services.\n\nCreate an AEM site\nDownload the latest AEM authoring with Edge Delivery Services site template from GitHub appropriate to your project.\nFor most projects: https://github.com/adobe-rnd/aem-boilerplate-xwalk/releases\nFor projects that integrate with Adobe Commerce: https://github.com/adobe-rnd/aem-boilerplate-xcom/releases\nSign in to your AEM as a Cloud Service authoring instance and navigate to the Sites console and click Create → Site from template.\n\nOn the Select a site template tab of the create site wizard, click the Import button to import a new template.\n\nUpload the AEM authoring with Edge Delivery Services site template that you downloaded from GitHub.\nThe template must only be uploaded once. 
Once uploaded, it can be reused to create additional sites.\nOnce the template is imported, it appears in the wizard. Click to select it and then click Next.\n\nProvide the following fields and tap or click Create.\nSite title - Add a descriptive title for the site.\nSite name - Use the <site-name> that you defined in the previous step.\nGitHub URL - Use the URL of the GitHub project you created in the previous step.\n\nAEM confirms the site creation with a dialog. Click OK to dismiss.\n\nOn the sites console, navigate to the index.html of the newly-created site and click Edit in the toolbar.\n\nThe Universal Editor opens in a new tab. You may need to tap or click Sign in with Adobe to authenticate to edit your page.\n\nYou can now edit your site using the Universal Editor.\n\nPublishing Your New Site to Edge Delivery Services\n\nOnce you are finished editing your new site using the Universal Editor, you can publish your content.\n\nOn the sites console, select all of the pages you created for your new site and tap or click Quick publish in the toolbar.\n\nTap or click Publish in the confirmation dialog to start the process.\n\nOpen a new tab in the same browser and navigate to the URL of your new site.\nhttps://main--<repository-name>--<owner>.aem.page\nSee your content published.\n\nNow that you have a working Edge Delivery Services project with AEM authoring, you can begin customizing it by creating and styling your own blocks.\n\nStart developing styling and functionality\nhttps://main--helix-website--adobe.aem.page/developer/videos/tutorial-step4.mp4\n\nTo get started with development, it is easiest to install the AEM Command Line Interface (CLI) and clone your repo locally using the following commands.\n\nnpm install -g @adobe/aem-cli\ngit clone https://github.com/<owner>/<repo>\n\n\n\n\nFrom there, change into your project folder and start your local development environment using the following.\n\ncd <repo>\naem up\n\n\n\n\nThis opens http://localhost:3000/ and 
you are ready to make changes.\nA good place to start is in the blocks folder, which is where most of the styling and code lives for a project. Simply make a change to a .css or .js file and you should see the changes in your browser immediately.\n\nOnce you are ready to push your changes, simply use Git to add, commit, and push your code to your preview (https://<branch>--<repo>--<owner>.aem.page/) and production (https://<branch>--<repo>--<owner>.aem.live/) sites.\n\nThat’s it, you made it! Congrats, your first site is up and running. If you need help with the tutorial, please join our Discord channel or get in touch with us.\n\nPrevious\n\nBuild\n\nUp Next\n\nCreating Blocks for the Universal Editor","lastModified":"1767692248","labs":""},{"path":"/docs/repoless","title":"Repoless - One codebase, many sites","image":"","description":"If you have many similar sites that mostly look and behave the same, but have different content, you may want to share code across multiple ...","content":"style\ncontent\nRepoless - One codebase, many sites\n\nIf you have many similar sites that mostly look and behave the same, but have different content, you may want to share code across multiple sites. In the past, the best way to do this was to create multiple GitHub repositories, keep them somehow in sync, and run each site off a dedicated GitHub repository.\n\nAEM supports running multiple sites from the same codebase without having to take care of code replication. This ability is also known as "repoless", because all but your first site don't need a GitHub repository of their own.\n\nFollow this document to learn how to create and manage multiple sites off the same codebase. 
Make sure you have followed the developer tutorial, as it provides the basics of creating a site on AEM.\n\nHow this works\n\nRepoless sites are enabled by the Configuration Service in Adobe Experience Manager, which introduces several concepts: Organization, Profile, Repository, Site, and Content Source. The following diagram shows how the pieces fit together.\n\nhttps://main--helix-website--adobe.aem.page/docs/architecture-repoless.svg\n\nOrganization\n\nEach site in Adobe Experience Manager belongs to an organization. This organization has the same name as an org on github.com, so that there is no naming conflict. An organization can have multiple sites, profiles, and users.\n\nProfile\n\nProfiles are a way to group and re-use important configurations such as headers, indexes, additional metadata, and so on. A profile can be used by multiple sites within an organization, so that there is consistency among them.\n\nGitHub Repository\n\nFor each codebase in Adobe Experience Manager, there is one GitHub repository. When you create a first site that uses this repository through AEM Code Sync, its code is made available to Adobe Experience Manager and can then be used by multiple sites. When updates are pushed to the GitHub repository, they apply to all sites that use this codebase.\n\nSite\n\nA site combines content, code, and configuration to create a new web experience. Configuration can be attached directly to the site, or it can be referenced from a profile. While a profile can be used by multiple sites, each site can have only one profile. When there is a conflict between the configuration settings in a site and in a profile, the site configuration wins.\n\nWhich content source and which codebase to use for a site are configuration settings, too. This is what enables code re-use and repoless sites.\n\nContent Source\n\nThe content that makes up a site is pulled from a content source when authors preview or publish content. 
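Because the content source is itself just configuration, it can be expressed as a fragment of the site config. A minimal sketch, with `{org}` and `{site}` as placeholders (the field names follow the Admin API examples later in this document):

```json
{
  "content": {
    "source": {
      "url": "https://content.da.live/{org}/{site}/"
    }
  }
}
```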
Typical sources include Microsoft Sharepoint and Google Drive, or the bring-your-own-markup adapter.\n\nPreparation\nPre-requisites\nGitHub Repository: https://github.com/{owner}/{repo}\n\nNote: If your organization is unable to use GitHub, see our Bring your own Git documentation.\n\nThe first step to creating a bunch of sites sharing the same codebase is to create a first canonical site. This site can be used to serve content, but its most important job is to ensure that code is getting synchronized from your GitHub repository to all sites that use the same codebase.\n\nCreate your first site\n\nIf this is your very first site, the easiest way to create your first site is to follow the steps of the developer tutorial. It only takes a few minutes and will ensure your org and site will be set up in the configuration service for you, using the GitHub owner as org and repo as site name.\n\nIf you already have an aem.live org and other sites configured, you can use the Admin API to create new sites. See below for an example API request.\n\nCreate a repoless site\nPre-requisites\nCanonical site configured with code and content repository\nContent source URL with the content you want to use for the repoless site, e.g. https://content.da.live/{org}/{site2}/\nAccess token for the Admin API\nCreate your first repoless site\n\nUsing the Admin API, you can create a new configuration for the first repoless site. Replace all variables in curly brackets with your actual details:\n\ncurl -X PUT https://admin.hlx.page/config/{org}/sites/{site2}.json \\\n  -H 'x-auth-token: {token}' \\\n  -H 'content-type: application/json' \\\n  --data '{\n  \"code\": {\n    \"owner\": \"{owner}\",\n    \"repo\": \"{repo}\"\n  },\n  \"content\": {\n    \"source\": {\n      \"url\": \"https://content.da.live/{org}/{site2}/\"\n    }\n  }\n}'\n\n\n\nThe new site is instantly available at https://main--{site2}--{org}.aem.page. 
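The host name of such a site always follows the pattern {branch}--{site}--{org}.aem.page. As a quick illustration, a minimal shell sketch that composes it ("mysite2" and "acme" below are illustrative placeholders, not values from an actual project):

```shell
#!/bin/sh
# Compose the Edge Delivery Services host name from its three parts,
# following the {branch}--{site}--{org}.aem.page pattern described above.
preview_url() {
  branch="$1"; site="$2"; org="$3"
  printf 'https://%s--%s--%s.aem.page' "$branch" "$site" "$org"
}

preview_url main mysite2 acme
# → https://main--mysite2--acme.aem.page
```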
On first access, most likely only the 404.html will be shown, because no content has been previewed yet. You can preview content directly from the content source, or by using the AEM Sidekick.\n\nNow you have multiple sites with different content repositories but sharing the same codebase.\n\nTo load your new site during local development, use the --url option on the aem up command to specify the new site’s URL, like:\n\naem up --url https://main--{site2}--{org}.aem.page\n\nWhat’s not needed once you go repoless\n\nWhen you start with Adobe Experience Manager, many settings are kept either with your content in the content source, or in the GitHub repository next to your code. With the Configuration Service there is now a central place for these kinds of settings.\n\nThe table below compares how different areas are configured in document mode (the traditional, distributed configuration) vs. using the Configuration Service API.\n\nWhat\t Document Mode\t API Mode \n Content Source\t fstab.yaml\t ContentConfig \n Folder Mappings\t fstab.yaml\t FoldersConfig \n robots.txt\t robots.txt in GitHub\t RobotsConfig \n CDN\t .helix/config.xlsx\t CDNConfig \n Headers\t .helix/headers.xlsx\t HeadersConfig \n Additional Metadata\t .helix/config.xlsx\t MetadataConfig \n Access\t .helix/config.xlsx\t AccessConfig \n Sidekick Plugins\t tools/sidekick/config.json\t SidekickConfig \n Index Definitions\t helix-query.yaml in GitHub\t IndexConfig \n Sitemap Definition\t helix-sitemap.yaml in GitHub\t SitemapConfig\n\nPrevious\n\nDeveloper Tutorial\n\nUp Next\n\nConfig Service Setup","lastModified":"1769179329","labs":""},{"path":"/developer/byom","title":"Bring Your Own Markup","image":"/developer/media_163649ea05f49d30d779c990171c0a271f31bab3b.png?width=1200&format=pjpg&optimize=medium","description":"Edge Delivery Services is independent of the authoring tooling and supports multiple content sources. 
This means you could provide your own content source and publish ...","content":"style\ncontent\n\nBring Your Own Markup\n\nEdge Delivery Services is independent of the authoring tooling and supports multiple content sources. This means you could provide your own content source and publish content to AEM from a repository you already have, without having to migrate the content first.\n\nThe API that enables this is called Bring Your Own Markup (BYOM). It uses HTML as the standard data format.\n\nBring Your Own Markup is widely adopted and two of Adobe's own content sources already implement it. BYOM is generic and easy to implement; it can be used for any content source.\n\nWhat format should the source bring to the table\n\nThe data format for BYOM is HTML - in fact, it is the same semantic HTML structure used by Edge Delivery Services when rendering your website.\n\nFor each page that is previewed, your BYOM service must return an HTML response. This means that the HTML must follow the semantics of sections, blocks, and default content.\n\nA very minimal example of this looks like:\n\n<!DOCTYPE html>\n<html>\n  <head>\n    <title>Home</title>\n  </head>\n  <body>\n    <header></header>\n    <main>\n      <div>\n        <div class=\"hero\">\n          <div>\n            <div>\n              <p>\n                <picture>\n                  <img loading=\"lazy\" alt=\"\" src=\"myimage.jpg\">\n                </picture>\n              </p>\n              <h1>Hello World</h1>\n            </div>\n          </div>\n        </div>\n        <p>Welcome to your website.</p>\n        <div class=\"metadata\">\n          <div>\n            <div>\n              <p>Description</p>\n            </div>\n            <div>\n              <p>Page Description</p>\n            </div>\n          </div>\n        </div>\n      </div>\n    </main>\n    <footer></footer>\n  </body>\n</html>\n\n\nExtra elements such as span tags, data attributes, or styles are removed when content is 
ingested. Page metadata can be provided via the common meta tags in the HTML head or via a metadata block with the page content.\n\nImage source URLs within the content must be accessible by Edge Delivery Services. The images will be downloaded and ingested during the preview of the page. The image source can be an absolute URL or relative to the page.\n\nBlock class names\n\nBlock class names can only contain alphanumeric characters and single dashes and may not start with a digit. Underscores or double dashes are not supported. A block can have multiple classes. See also block options and block decoration.\n\nSupported: hero wide or hero hero-wide\nNot supported: hero hero_wide or hero hero--wide\nWhat about sheets?\n\nStructured data in the form of spreadsheets can be provided via BYOM as well. The data is provided as JSON. The format of the JSON has to follow the Edge Delivery Services sheet format.\n\nSetup BYOM as primary content source\n\nThe configuration of BYOM as the primary content source is carried out, as for all projects, either via fstab.yaml or via the configuration service.\n\nA file-based setup will use an fstab.yaml like this:\n\nmountpoints:\n  /:\n    url: \"https://content-service.acme.com/data\"\n    type: \"markup\"\n    suffix: \".html\"\n\n\nIf the site is set up using the configuration service, the site config can be created like this:\n\ncurl -X POST https://admin.hlx.page/config/acme/sites/website.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-auth-token>' \\\n  --data '{\n  \"version\": 1,\n  \"code\": {\n    \"owner\": \"acme\",\n    \"repo\": \"website\"\n  },\n  \"content\": {\n    \"source\": {\n      \"url\": \"https://content-service.acme.com/data\",\n      \"type\": \"markup\",\n      \"suffix\": \".html\"\n    }\n  }\n}'\n\n\nThe path of the page being published is appended to the provided BYOM source URL.\n\nIn both configuration options the \"type\": \"markup\" must be present to indicate this is a 
BYOM content source. \"suffix\": \".html\" is optional; if configured, the admin service adds the suffix when requesting the content from the BYOM service URL.\n\nHow markup URLs are constructed\n\nWhen a page is previewed with a BYOM markup source configured, the admin service constructs the request URL with the format {url.origin}{url.path}{contentPath}{suffix}{url.params}.\n\nWhere:\n\nurl.origin - The origin from the source configuration\nurl.path - The path from the source configuration\ncontentPath - The path of the page being requested\nsuffix - The optional suffix from the configuration\nurl.params - Any query parameters from the source configuration\n\nFor example, given this BYOM configuration:\n\n\"source\": {\n  \"url\": \"https://content-service.acme.com/data?foo=bar\",\n  \"type\": \"markup\",\n  \"suffix\": \".html\"\n}\n\n\nWhen previewing a page at /products/widget, the constructed URL will be https://content-service.acme.com/data/products/widget.html?foo=bar.\n\nSetup BYOM as content overlay\n\nBYOM can also be used as an overlay for another content source. It does not matter whether the primary content source is SharePoint, Google Drive, or BYOM. However, the overlay content source must always be BYOM. A typical use case for such a setup is, for example, the automatic publishing of product pages directly from the commerce backend.\n\nTo use the content overlay, the site must use the configuration service. 
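The URL construction described under "How markup URLs are constructed" can also be sketched in shell. This is only an illustration of the composition rule, not part of the admin service; the values mirror the example configuration above:

```shell
#!/bin/sh
# Recreate the admin service's BYOM request URL from the example config:
# {url.origin}{url.path}{contentPath}{suffix}{url.params}
source_url='https://content-service.acme.com/data'   # origin + path of the source URL
source_params='foo=bar'                              # query parameters of the source URL
suffix='.html'                                       # optional suffix
content_path='/products/widget'                      # page being previewed

printf '%s%s%s?%s\n' "$source_url" "$content_path" "$suffix" "$source_params"
# → https://content-service.acme.com/data/products/widget.html?foo=bar
```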
A file-based setup via fstab.yaml is not supported.\n\ncurl -X POST https://admin.hlx.page/config/acme/sites/website.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-auth-token>' \\\n  --data '{\n  \"version\": 1,\n  \"code\": {\n    \"owner\": \"acme\",\n    \"repo\": \"website\"\n  },\n  \"content\": {\n    \"source\": {\n      \"url\": \"https://acme.sharepoint.com/sites/aem/Shared%20Documents/website\"\n    },\n    \"overlay\": {\n      \"url\": \"https://content-service.acme.com/data\",\n      \"type\": \"markup\"\n    }\n  }\n}'","lastModified":"1773661125","labs":""},{"path":"/docs/operational-telemetry","title":"Operational Telemetry","image":"/docs/media_16b177a242f62f12a8bde010bb080b4463d6469ab.png?width=1200&format=pjpg&optimize=medium","description":"Adobe Experience Manager uses Operational Telemetry to gather operations data that is strictly necessary to discover and fix functional and performance issues on Adobe Experience ...","content":"style\ncontent\n\nOperational Telemetry\n\nAdobe Experience Manager uses Operational Telemetry to gather operations data that is strictly necessary to discover and fix functional and performance issues on Adobe Experience Manager-powered sites. Operational Telemetry data can be used to diagnose performance issues. Operational Telemetry preserves the privacy of visitors through sampling (only a small portion of all page views will be monitored).\n\nPrivacy\n\nOperational Telemetry in Adobe Experience Manager is designed to preserve visitor privacy and minimize data collection. As a visitor, this means that Adobe will not attempt to collect personal information about you or information that can be traced back to you. As a site operator, review the data items collected below to understand if they require consent.\nAEM Operational Telemetry does not use any client-side state or ID, such as cookies, localStorage, sessionStorage, or similar, to collect usage metrics. 
Data is submitted transparently through a Navigator.sendBeacon call, not through pixels or similar techniques. There is no “fingerprinting” of devices or individuals via their IP address, User Agent string, or any other data for the purpose of capturing sampled data.\n\nIt is not permitted to add any personal data into the Operational Telemetry data collection nor may Operational Telemetry data be used for use cases that go beyond what is strictly necessary.\n\nOperational Telemetry data is sampled\n\nTraditional web analytics solutions try to collect every single visitor. Adobe Experience Manager’s Operational Telemetry only captures information from a small fraction of activities tied to page views, with no concept of identifying a visitor or a user or even a browser session. Under normal circumstances, the sampling rate is one out of one hundred page views, although site operators can decide to increase or decrease this number.\n\nAs the decision whether data will be collected is made on a page-view-by-page-view basis, it cannot be used to track interactions across multiple pages. Operational Telemetry has no concept of visits, visitors, or sessions, only checkpoints during a page view. This is by design.\n\nWhat data is being collected\n\nOperational Telemetry is designed to prevent the collection of personally identifiable information. The full set of information that can be collected by Adobe Experience Manager’s Operational Telemetry is:\n\nThe host name of the site being visited, such as www.aem.live\nThe host name of the server responsible for the data collection such as rum.hlx.live\nThe user agent (technical name of the browser) that is used to display the page such as Mozilla/5.0 (iPhone; CPU iPhone OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Mobile/15E148 Safari/604.1. 
This string is then simplified to desktop, desktop:windows, desktop:mac, desktop:linux, mobile, mobile:android, mobile:ios, mobile:ipados, or bot so that only the device class is stored.\nThe time of the data collection such as 2021-06-26 06:00:02.596000 UTC (in order to preserve privacy, we round all minutes to the previous hour, so that only seconds and milliseconds are tracked)\nThe URL of the page being visited, such as https://www.aem.live/docs/operational-telemetry; if the URL contains URL parameters, these parameters will not be collected or stored\nThe Referrer URL (the URL of the page that linked to the current page) such as https://www.aem.live/docs\nA randomly generated ID of the page view such as 2Ac6\nThe weight or inverse of the sampling rate such as 100 (this means only one in one hundred page views will be recorded)\nThe checkpoint, or name of a particular event in the sequence of loading the page or interacting with it as a visitor such as viewmedia (this particular checkpoint is fired when an image becomes at least 25% visible in the browser)\nThe source, or identifier of the DOM element that received the interaction event for the checkpoint mentioned above such as .images\nThe target, or link to an external page or resource that received an interaction for the checkpoint mentioned above such as https://blog.adobe.com/jp/publish/2022/06/29/media_162fb947c7219d0537cce36adf22315d64fb86e94.png\nThe Core Web Vitals (CWV) performance metrics Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS) that describe the visitor’s quality of experience.\nThe foundational performance metric Time to First Byte (TTFB) that describes the visitor's quality of experience\n\nWhat data is being stored in the visitor's browser\n\nFor sites that use the built-in experimentation feature, the name of the experiment and variants that the visitor has seen are also stored in the 
browser's session storage.\n\nHow Operational Telemetry data is being used\n\nAdobe uses Operational Telemetry data for the following purposes:\n\nTo identify and fix performance bottlenecks on customer sites\nTo estimate the number of page views to customer sites\nTo understand how Adobe Experience Manager interacts with other scripts (such as analytics, targeting, or external libraries) on the same page to increase compatibility\n\nThese use cases are strictly necessary to ensure sites running on AEM are working for their visitors.\n\nData Overview\nData\t Stored in browser\t Sent to data collection\t Persisted\t Usable for identification/fingerprinting\t Strictly necessary \n Site host name\t no\t yes\t yes\t no\t yes \n Data collection server\t no\t no\t yes\t no\t yes \n Client IP address\t no\t optionally\t never\t yes\t no \n User agent\t no\t yes\t masked\t yes, when unmasked\t no \n Timestamp\t no\t yes\t masked\t yes, when unmasked\t yes \n Full URL of page visited\t no\t no\t no\t yes\t no \n URL of page visited, without URL parameters\t no\t yes\t yes\t no\t yes \n Referrer URL (without URL parameters)\t no\t yes\t yes\t no\t yes \n Page View ID\t yes, for the duration of the page view\t yes\t yes\t no\t yes \n Weight (Sampling Rate)\t yes, for the duration of the page view\t yes\t yes\t no\t yes \n Checkpoint\t no\t yes\t yes\t no\t yes \n Source\t no\t yes\t yes\t no\t yes \n Target\t no\t yes\t yes\t no\t yes \n LCP\t no\t yes\t yes\t no\t yes \n FID\t no\t deprecated\t no\t no\t yes \n CLS\t no\t yes\t yes\t no\t yes \n INP\t no\t yes\t yes\t no\t yes \n TTFB\t no\t yes\t yes\t no\t yes \n Experiment variants (only when using Experimentation)\t yes, for the duration of the session\t yes\t yes\t no\t yes\n\nThe data collected and stored is designed to prevent:\n\nIdentification of individual visitors or devices\nFingerprinting\nTracking of visits or sessions\nEnrichment or combination with personally identifiable information from other sources\n\nOther 
than Adobe, the following third parties are involved in the collection of Operational Telemetry data:\n\nFastly, Inc\nCloudflare, Inc\nCoralogix LTD\nGoogle LLC\nAmazon.com, Inc\n\nManaging Operational Telemetry\n\nAdobe Experience Manager as a Cloud Service (outside of Edge Delivery Services) provides environment variables to control Operational Telemetry behavior through Cloud Manager configuration.\n\nDisabling Operational Telemetry\n\nAdobe recommends keeping Operational Telemetry enabled, as its insights help Adobe improve your digital experiences, while you retain full control over the feature. The service operates silently and imposes no burden on website performance.\n\nTo disable Operational Telemetry, set an environment variable in Cloud Manager:\n\nVariable name: AEM_OPTEL_DISABLED\nValue: true\n\nTo re-enable Operational Telemetry, remove this environment variable.\n\nNote that disabling telemetry means forgoing opportunities to identify performance bottlenecks and optimize visitor engagement based on real-world usage patterns.\n\nContent Security Policy with Nonce Support\n\nFor sites implementing Content Security Policy (CSP) with nonce-based script validation, Operational Telemetry offers experimental support for CSP nonces.\n\nTo enable nonce support, set an environment variable in Cloud Manager:\n\nVariable name: AEM_OPTEL_NONCE\nValue: true\n\nTo disable nonce support, remove this environment variable.\n\nThis feature remains experimental. 
For any issues with CSP nonce implementation, contact Adobe Support directly.\n\nOperational Telemetry for Developers\n\nWe have additional in-depth information for developers who want to use data to optimize their own sites, including instructions on how to add Operational Telemetry instrumentation to your site, even if it's not running on AEM.\n\nUp Next\n\nExplorer","lastModified":"1756821914","labs":""},{"path":"/docs/aem-authoring","title":"Authoring with AEM Sites for Edge Delivery Services","image":"/docs/media_1b57584b3464df56562d358f127780590772dead0.png?width=1200&format=pjpg&optimize=medium","description":"Authoring and persisting content in AEM as a Cloud Service using the Universal Editor, you benefit from the power of AEM’s robust tool set for ...","content":"style\ncontent\n\nAuthoring with AEM Sites for Edge Delivery Services\n\nBy authoring and persisting content in AEM as a Cloud Service using the Universal Editor, you benefit from the power of AEM’s robust tool set for managing your content, such as multi-site management, localization workflows, and Launches, while at the same time your pages are delivered with unparalleled performance by Edge Delivery Services.\n\nAuthoring Workflow\n\nWhen using AEM as a Cloud Service for managing your content, the Universal Editor and Edge Delivery Services work together for a seamless authoring experience.\n\nhttps://www.aem.live/docs/aem-authoring.svg\n\nThe AEM Sites console is used for content management such as creating new pages, Experience Fragments, Content Fragments, etc.\nAll features of AEM are available such as workflows, MSM, translation, Launches, etc.\nThe Universal Editor is used to author the content managed in AEM.\nThe Universal Editor offers a new and modern UI for content authoring.\nAEM renders the HTML but includes the scripts, styles, icons, and other resources from Edge Delivery Services.\nAll changes are persisted to AEM as a Cloud Service.\nContent that you author with the Universal 
Editor and persist to AEM is published to Edge Delivery Services.\nAEM renders semantic HTML that is needed for ingestion by Edge Delivery Services.\nContent is published to Edge Delivery Services.\nEdge Delivery Services ensures a 100% Core Web Vitals score.\nPage Structure\n\nWhen authoring content using AEM and the Universal Editor, the same concepts of page blocks and sections used when authoring document-based content are used to structure your pages.\n\nBlocks are fundamental components of a page delivered by Edge Delivery Services. Authors can choose from default blocks provided as standard by Adobe or from blocks customized for your project by your developers.\n\nThe Universal Editor provides a modern and intuitive GUI for authoring your content by adding and arranging blocks, which the Universal Editor refers to as components.\n\nDetails of the components can then be configured in the Properties panel.\n\nFirst Steps\n\nIf you wish to get started authoring content using AEM and the Universal Editor with Edge Delivery Services, please see the following documents.\n\nGetting Started – Universal Editor Developer Tutorial\n\nGets you up and running with a new project using the Universal Editor and AEM for authoring.\n\nCreating Blocks for use with the Universal Editor\n\nLearn how to create blocks instrumented for the Universal Editor, including definitions, block decoration, and styles.\n\nPath mapping\n\nTo use AEM authoring as your content source, you need to set up your project’s path mapping.\n\nManaging tabular data\n\nManage tabular data using an intuitive tool: the spreadsheet.\n\nPublishing pages with AEM Assets\n\nPublish assets from AEM Assets along with content you edit in AEM to Edge Delivery Services.\n\nAdvanced Usage\n\nDepending on your project needs, you might like to investigate some more advanced features.\n\nManaging taxonomy data\n\nAEM allows you to create a rich taxonomy of tags to organize your pages.\n\nReusing code across sites\n\nIf 
you have many similar sites that mostly look and behave the same, but have different content, you may want to share code across multiple sites.\n\nMulti site management\n\nYou can use multi site management to create an entire content structure for your brand across locales and languages, authoring the content centrally.\n\nRepoless stage and prod environments\n\nLearn how to set up a site for your production environment separate from your staging environment.\n\nConfiguration templates\n\nLearn how to use the Sites console to easily create and manage your project configuration by using a configuration template.\n\nContent Fragments from other AEM instances\n\nContent Fragments from AEM instances without Edge Delivery Services can be integrated into Edge Delivery Services using this Early Access technology.","lastModified":"1758011391","labs":""},{"path":"/docs/publishing-from-authoring","title":"How content is published from AEM Sites authoring to Edge Delivery Services","image":"/docs/media_19b09e39d2a7dbe209e6fcefbc9dd761795750f6a.png?width=1200&format=pjpg&optimize=medium","description":"When using the Universal Editor to author AEM content, publishing is as simple as clicking the Publish button in the Universal Editor. Please see the ...","content":"style\ncontent\n\nHow content is published from AEM Sites authoring to Edge Delivery Services\n\nWhen using the Universal Editor to author AEM content, publishing is as simple as clicking the Publish button in the Universal Editor. Please see the document Publishing Content with the Universal Editor.\n\nThe flow of information when publishing is as follows. Once the author starts publication, this flow is automatic and is illustrated here for information purposes.\n\nNOTE: A maximum of 5000 paths published from the authoring UI or by workflows is permitted per day. Integrations that create bulk-publication workloads are not supported. 
If your project requires higher capacity, please propose it for the VIP Program.\n\nhttps://www.aem.live/docs/publishing-from-authoring.svg\n\nThe content author publishes AEM content in the Universal Editor.\nA publish event is pushed to the Adobe pipeline queue.\nThe Edge Delivery Services publish service forwards the relevant events to the Edge Delivery Services admin API.\nEdge Delivery Services pulls and ingests semantic HTML from AEM author.\nAEM is updated with publish status.\n\nBy default, the Edge Delivery Services admin API is not protected and can be used to publish or unpublish documents without authentication. In order to configure authentication for the admin API as documented in Configuring Authentication for Authors, your project must be provisioned with an API_KEY, which grants access to the publish service. Please reach out to the Adobe team on Slack for guidance.\n\nPrevious\n\nUnsupported","lastModified":"1744375420","labs":""},{"path":"/developer/universal-editor-blocks","title":"Creating Blocks Instrumented for use with the Universal Editor","image":"/developer/media_16452698b3eab5476eb5ac38bcfd3fd4a63756af6.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to create blocks instrumented for the Universal Editor when using AEM authoring as your content source by adding components, loading component definitions in ...","content":"style\ncontent\n\nCreating Blocks Instrumented for use with the Universal Editor\n\nLearn how to create blocks instrumented for the Universal Editor when using AEM authoring as your content source by adding components, loading component definitions in the Universal Editor, publishing pages, implementing block decoration and styles, bringing the changes to production, and verifying them.\n\nPrerequisites\n\nTo create your blocks, you will need some existing knowledge of AEM authoring with Edge Delivery Services projects as well as the Universal Editor. 
You should also already have access to Edge Delivery Services and be familiar with its basics, including:\n\nYou have access to an AEM Cloud Service sandbox.\nYou have completed the Getting Started – Universal Editor Developer Tutorial.\nAdding a New Block to Your Project\n\nLet’s build a block to render a memorable quote on your page.\n\nTo simplify this example, all changes are made to the main branch of the project repository. Of course, for your actual project, you should follow development best practices by developing on a different branch and reviewing all changes via pull request before merging to main.\n\nAdobe recommends that you develop blocks in a three-phased approach:\n\nCreate the definition and model for the block, review it, and bring it to production.\nCreate content with the new block.\nImplement the decoration and styles for the new block.\n\nThe following quote block example follows this approach.\n\nCreate Block Definition and Model\nClone the GitHub project that you created in the Getting Started – Universal Editor Developer Tutorial locally and open it in an editor of your choice.\nMicrosoft Visual Studio Code is used in this document for illustrative purposes.\n\nEdit the component-definition.json file at the root of the project, add the following definition for your new quote block, and save the file.\n{\n  \"title\": \"Quote\",\n  \"id\": \"quote\",\n  \"plugins\": {\n    \"xwalk\": {\n      \"page\": {\n        \"resourceType\": \"core/franklin/components/block/v1/block\",\n        \"template\": {\n          \"name\": \"Quote\",\n          \"model\": \"quote\",\n          \"quote\": \"<p>Think, McFly! 
Think!</p>\",\n          \"author\": \"Biff Tannen\"\n        }\n      }\n    }\n  }\n}\n\nEdit the component-models.json file at the root of the project, add the following model definition for your new quote block, and save the file.\nPlease see the document Content Modeling for AEM authoring as your content source for more information about what is important to consider when creating content models.\n{\n  \"id\": \"quote\",\n  \"fields\": [\n     {\n       \"component\": \"richtext\",\n       \"name\": \"quote\",\n       \"value\": \"\",\n       \"label\": \"Quote\",\n       \"valueType\": \"string\"\n     },\n     {\n       \"component\": \"text\",\n       \"valueType\": \"string\",\n       \"name\": \"author\",\n       \"label\": \"Author\",\n       \"value\": \"\"\n     }\n   ]\n}\n\nEdit the component-filters.json file at the root of the project, add the quote block to the filter definition so that the block can be added to any section, and save the file.\n{\n  \"id\": \"section\",\n  \"components\": [\n    \"text\",\n    \"image\",\n    \"button\",\n    \"title\",\n    \"hero\",\n    \"cards\",\n    \"columns\",\n    \"quote\"\n   ]\n}\n\nUsing git, commit these changes to your main branch.\nCommitting to main is for illustrative purposes only. Follow best practices and use a pull request for actual project work.\nCreate content with the block\n\nNow that your basic quote block is defined and committed to the sample project, you can add a quote block to an existing page.\n\nIn a browser, sign into AEM as a Cloud Service. 
Using the Sites console, navigate to the site that you created in the Getting Started – Universal Editor Developer Tutorial and select a page.\nIn this case, index is used for illustrative purposes.\n\nTap or click Edit in the toolbar of the console and the Universal Editor opens.\nTo load the page, you may need to tap or click Sign in with Adobe to authenticate to AEM in the Universal Editor.\nIn the Universal Editor, select a section. In the properties panel, tap or click the Add icon and then select your new Quote block from the menu.\nThe Add icon is a plus symbol.\nYou know that you have selected a section if the blue outline of the selected object has a tab labeled Section.\nIn this example, tapping or clicking slightly above the Lorem Ipsum heading selects a section containing the heading and lorem ipsum text.\n\nThe page is reloaded and the quote block is added to the bottom of the selected section with the default content specified in the component-definition.json file.\nThe quote block can be selected and edited like any other block, either in-place or in the properties panel.\nStyling will be applied in a later step.\n\nOnce you are satisfied with the content of your quote, you can publish the page by tapping or clicking the Publish button in the toolbar of the Universal Editor.\nVerify that the content was published by navigating to the published page. 
The link will be similar to https://<branch>--<repo>--<owner>.aem.page\n\nStyle the block\n\nNow that you have a working quote block you can apply styling to it.\n\nReturn to the editor for your project.\nCreate a quote folder under the blocks folder.\n\nIn the new quote folder, add a quote.js file to implement block decoration by adding the following JavaScript and save the file.\nexport default function decorate(block) {\n  const [quoteWrapper] = block.children;\n\n  const blockquote = document.createElement('blockquote');\n  blockquote.textContent = quoteWrapper.textContent.trim();\n  quoteWrapper.replaceChildren(blockquote);\n}\n\nIn the quote folder, add a quote.css file to define the styling for the block by adding the following CSS code and save the file.\n.block.quote {\n    background-color: #ccc;\n    padding: 0 0 24px;\n    display: flex;\n    flex-direction: column;\n    margin: 1rem 0;\n}\n\n.block.quote blockquote {\n    margin: 16px;\n    text-indent: 0;\n}\n\n.block.quote > div:last-child > div {\n    margin: 0 16px;\n    font-size: small;\n    font-style: italic;\n    position: relative;\n}\n\n.block.quote > div:last-child > div::after {\n    content: \"\";\n    display: block;\n    position: absolute;\n    left: 0;\n    bottom: -8px;\n    height: 5px;\n    width: 30px;\n    background-color: darkgray;\n}\n\nUsing git, commit these changes to your main branch.\nCommitting to main is for
illustrative purposes only. Follow best practices and use a pull request for actual project work.\nReturn to your browser tab of the Universal Editor where you were editing the page of your project and reload the page to view your styled block.\nSee the now-styled quote block on the page.\n\nVerify that the changes were pushed to production by navigating to the published page. The link will be similar to https://<branch>--<repo>--<owner>.aem.page\n\nCongratulations! You now have a fully working and styled quote block. You can use this example as a basis for designing your own project-specific blocks.\n\nBlock options\n\nIf you need a block to look or behave slightly differently based on certain circumstances (but not different enough to become a new block in itself), you can let authors choose from block options.\n\nIf you add a classes property to the block, the property is rendered in the table header for simple blocks, or as a value list for items in a container block.\n\n{\n    \"id\": \"quote\",\n    \"fields\": [\n       {\n         \"component\": \"richtext\",\n         \"name\": \"quote\",\n         \"value\": \"\",\n         \"label\": \"Quote\",\n         \"valueType\": \"string\"\n       },\n       {\n         \"component\": \"text\",\n         \"valueType\": \"string\",\n         \"name\": \"author\",\n         \"label\": \"Author\",\n         \"value\": \"\"\n       },\n       {\n        \"component\": \"select\",\n        \"name\": \"classes\",\n        \"value\": \"\",\n        \"label\": \"Background Color\",\n        \"description\": \"The quote background color\",\n        \"valueType\": \"string\",\n        \"options\": [\n          {\n            \"name\": \"Red\",\n            \"value\": \"bg-red\"\n          },\n          {\n            \"name\": \"Green\",\n            \"value\": \"bg-green\"\n          },\n          {\n            \"name\": \"Blue\",\n            \"value\": \"bg-blue\"\n          }\n        ]\n      }\n     ]\n  }\n\nPreserving
Universal Editor Instrumentation for DOM Mutations\n\nTypically, there is no need to change the DOM structure of a block beyond just adding class names or adding additional wrappers. In the exceptional case when you must move pieces of content into a new structure in order to fit the final design or output requirements of the block, special care must be taken to preserve the editing experience.\n\nIf the original element contains Universal Editor instrumentation attributes (data-aue-*), these attributes must be retained to preserve the in-context editing experience. To do this, use the moveInstrumentation() method available in the scripts.js file of the boilerplate.\n\nFor an example implementation, refer to the cards block. The cards block is a container block where each row of the underlying block table models a single card. The decoration converts this structure into a <ul> with an <li> for each card. To preserve the in-context editing experience, it moves the instrumentation from each card’s <div> elements to the newly created <li> element.\n\nUsing other working branches\n\nThis guide had you commit directly to the main branch for simplicity’s sake. For experimentation in a sample repository, this is usually not an issue. For actual project work, you should follow development best practices by developing on a different branch and reviewing all changes via pull request before merging to main.\n\nWhen you are not developing on the main branch, you can append ?ref=<branch> in the Universal Editor location bar to load the page from your branch. <branch> is the branch name as it would be used for your project’s preview or live URLs, e.g. https://<branch>--<repo>--<owner>.aem.page.\n\nBlocks for AEM authoring and document-based authoring\n\nOn certain projects, you may want to support both AEM authoring as your content source using the Universal Editor as well as document-based authoring. 
To minimize development time and ensure the same site experience, you can create one set of blocks to support both use cases.\n\nTo do this, you must use the same content modeling approach for both your AEM authoring setup and your document-based authoring setup.\n\nApproach\n\nIn AEM authoring, you declare a model and provide naming conventions. Data is then rendered in table-like block structures using Edge Delivery in the same way as if the table had been created manually using document-based authoring.\n\nTo achieve this, certain assumptions are made. For a simple block like a teaser, all properties and groups of properties are rendered in 1…n rows with a single column each. For blocks that have 1…n items (such as carousel and cards), the items are appended after these rows with one row each and a column for each property/group of properties.\n\nIf you follow the same approach for document-based authoring, you can reuse your AEM authoring blocks.\n\nExample\n\nThe following example of a teaser block follows the recommended approach and could be used between document-based and AEM authoring.\n\nWith this data:\n\n{\n  \"name\": \"teaser\",\n  \"model\": \"teaser\",\n  \"image\": \"/content/dam/teaser-background.png\",\n  \"imageAlt\": \"A group of people sitting on a stage\",\n  \"teaserText_subtitle\": \"Adobe Experience Cloud\",\n  \"teaserText_title\": \"Meet the Experts\",\n  \"teaserText_titleType\": \"h2\",\n  \"teaserText_description\": \"<p>Join us in this ask me everything session...</p>\",\n  \"teaserText_cta1\": \"https://link.to/more-details\",\n  \"teaserText_cta1Text\": \"More Details\",\n  \"teaserText_cta2\": \"https://link.to/sign-up\",\n  \"teaserText_cta2Text\": \"RSVP\",\n  \"teaserText_cta2Type\": \"primary\"\n}\n\n\nYou get this markup:\n\n<div class=\"teaser\">\n  <div>\n    <div>\n      <picture>\n        <img src=\"/content/dam/teaser-background.png\" alt=\"A group of people sitting on a stage\">\n      </picture>\n    
</div>\n  </div>\n  <div>\n    <div>\n      <p>Adobe Experience Cloud</p>\n      <h2>Meet the Experts</h2>\n      <p>Join us in this ask me everything session ...</p>\n      <p><a href=\"https://link.to/more-details\">More Details</a></p>\n      <p><strong><a href=\"https://link.to/sign-up\">RSVP</a></strong></p>\n    </div>\n  </div>\n</div>\n\n\nAnd it will be turned into this table representation:\n\n+-------------------------------------------------+\n| Teaser                                          |\n+=================================================+\n| ![A group of people sitting on a stage][image0] |\n+-------------------------------------------------+\n| Adobe Experience Cloud                          |\n| ## Meet the Experts                             |\n| Join us in this ask me everything session ...   |\n| [More Details](https://link.to/more-details)    |\n| [RSVP](https://link.to/sign-up)                 |\n+-------------------------------------------------+\n\nNext steps\n\nNow that you know how to create blocks, it is essential to understand how to model content in a semantic way to achieve a lean developer experience.\n\nPrevious\n\nGetting Started – Universal Editor Developer Tutorial\n\nUp Next\n\nContent Modeling","lastModified":"1762424619","labs":""},{"path":"/developer/component-model-definitions","title":"Content modeling for AEM authoring projects","image":"/developer/media_115e97240aa1291be8c8b62cb784026e02859b231.png?width=1200&format=pjpg&optimize=medium","description":"Learn how content modeling works for projects using AEM authoring as a content source and how to model your own content.","content":"style\ncontent\n\nContent modeling for AEM authoring projects\n\nLearn how content modeling works for projects using AEM authoring as a content source and how to model your own content.\n\nPrerequisites\n\nProjects using AEM authoring as a content source inherit the majority of the mechanics of any other Edge Delivery Services project, 
independent of the content source or authoring method.\n\nBefore you begin modeling content for your project, make sure you first read the following documentation.\n\nGetting Started – Universal Editor Developer Tutorial\nMarkup, Sections, Blocks, and Auto Blocking\nBlock Collection\n\nIt is essential to understand those concepts in order to create a compelling content model that works in a content source-agnostic way. This document provides details about the mechanics implemented specifically for AEM authoring.\n\nDefault content\n\nDefault content is content an author would intuitively put on a page without adding any additional semantics. This includes text, headings, links, and images. Such content is self-explanatory in its function and purpose.\n\nIn AEM, this content is implemented as components with very simple, pre-defined models, which include everything that can be serialized in Markdown and HTML.\n\nText: Rich text (including list elements and strong or italic text)\nTitle: Text, type (h1-h6)\nImage: Source, description\nButton: Text, title, url, type (default, primary, secondary)\n\nThe model of these components is part of the boilerplate for projects with AEM authoring as the content source.\n\nBlocks\n\nBlocks are used to create richer content with specific styles and functionality. In contrast to default content, blocks do require additional semantics.\n\nBlocks are essentially pieces of content decorated by JavaScript and styled with a stylesheet.\n\nBlock model definition\n\nWhen using AEM authoring as your content source, the content of blocks must be modeled explicitly to provide the author with an interface for creating content. Essentially, you need to create a model so the authoring UI knows what options to present to the author based on the block.\n\nThe component-models.json file defines the model of blocks. 
The fields defined in the component model are persisted as properties in AEM and rendered as cells in the table that makes up a block.\n\n{\n  \"id\": \"hero\",\n  \"fields\": [\n    {\n      \"component\": \"reference\",\n      \"valueType\": \"string\",\n      \"name\": \"image\",\n      \"label\": \"Image\",\n      \"multi\": false\n    },\n    {\n      \"component\": \"text-input\",\n      \"valueType\": \"string\",\n      \"name\": \"imageAlt\",\n      \"label\": \"Alt\",\n      \"value\": \"\"\n    },\n    {\n      \"component\": \"text-area\",\n      \"name\": \"text\",\n      \"value\": \"\",\n      \"label\": \"Text\",\n      \"valueType\": \"string\"\n    }\n  ]\n}\n\n\nNote that not every block must have a model. Some blocks are simply containers for a list of children, where each child has its own model.\n\nIt is also necessary to define which blocks exist and can be added to a page using the Universal Editor. The component-definition.json file lists the components as they are made available by the Universal Editor.\n\n{\n  \"title\": \"Hero\",\n  \"id\": \"hero\",\n  \"plugins\": {\n    \"xwalk\": {\n      \"page\": {\n        \"resourceType\": \"core/franklin/components/block/v1/block\",\n        \"template\": {\n          \"name\": \"Hero\",\n          \"model\": \"hero\"\n        }\n      }\n    }\n  }\n}\n\n\nIt is possible to use one model for many blocks. 
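A second block definition can simply point its template at an existing model ID. The following sketch shows a hypothetical Banner block reusing the hero model from above; the Banner block is illustrative only and not part of the boilerplate:

```json
{
  "title": "Banner",
  "id": "banner",
  "plugins": {
    "xwalk": {
      "page": {
        "resourceType": "core/franklin/components/block/v1/block",
        "template": {
          "name": "Banner",
          "model": "hero"
        }
      }
    }
  }
}
```

Both definitions would then surface the same fields in the properties panel, while each block keeps its own name and therefore its own styling and decoration.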
For example, some blocks may share a model that defines a text and image.\n\nFor each block, the developer:\n\nMust use the core/franklin/components/block/v1/block resource type, the generic implementation of the block logic in AEM.\nMust define the block name, which will be rendered in the block’s table header.\nThe block name is used to fetch the right style and script to decorate the block.\nCan define a model ID.\nThe model ID is a reference to the component’s model, which defines the fields available to the author in the properties panel.\nCan define a filter ID.\nThe filter ID is a reference to the component’s filter, which allows changing the authoring behavior, for example by limiting which children can be added to the block or section, or which RTE features are enabled.\n\nAll of this information is stored in AEM when a block is added to a page. If either the resource type or block name is missing, the block will not render on the page.\n\nWARNING: While possible, it is not necessary or recommended to implement custom AEM components. The components for Edge Delivery Services provided by AEM are sufficient and offer certain guard rails to ease development.\n\nThe components provided by AEM render markup that can be consumed by helix-html2md when publishing to Edge Delivery Services and by aem.js when loading a page in the Universal Editor. The markup is the stable contract between AEM and the other parts of the system, and does not allow for customizations. For this reason, projects must not change the components and must not use custom components.\n\nBlock structure\n\nThe properties of blocks are defined in the component models and persisted as such in AEM. 
Properties are rendered as cells in the block’s table-like structure.\n\nSimple blocks\n\nIn the simplest form, a block renders each property in a single row/column in the order the properties are defined in the model.\n\nIn the following example, the image is defined first in the model and the text second. They are thus rendered with the image first and text second.\n\nWith this data:\n\n{\n  \"name\": \"Hero\",\n  \"model\": \"hero\",\n  \"image\": \"/content/dam/image.png\",\n  \"imageAlt\": \"Helix - a shape like a corkscrew\",\n  \"text\": \"<h1>Welcome to AEM</h1>\"\n}\n\n\nYou get this markup:\n\n<div class=\"hero\">\n  <div>\n    <div>\n      <picture>\n        <img src=\"/content/dam/image.png\" alt=\"Helix - a shape like a corkscrew\">\n      </picture>\n    </div>\n  </div>\n  <div>\n    <div>\n      <h1>Welcome to AEM</h1>\n    </div>\n  </div>\n</div>\n\n\nAnd it will be turned into this table representation:\n\n+---------------------------------------------+\n| Hero                                        |\n+=============================================+\n| ![Helix - a shape like a corkscrew][image0] |\n+---------------------------------------------+\n| # Welcome to AEM                            |\n+---------------------------------------------+\n\n\nYou may notice that some types of values allow inferring semantics in the markup, and properties are combined into single cells. This behavior is described in the section Type inference.\n\nKey-value block\n\nIn many cases, it is recommended to decorate the rendered semantic markup, add CSS class names, add new nodes or move them around in the DOM, and apply styles.\n\nIn other cases, however, the block is read as a key-value pair-like configuration.\n\nAn example of this is the section metadata. In this use case, the block can be configured to render as a key-value pair table. 
Please see the section Sections and Section Metadata for more information.\n\nWith this data:\n\n{\n  \"name\": \"Featured Articles\",\n  \"model\": \"spreadsheet-input\",\n  \"key-value\": true,\n  \"source\": \"/content/site/articles.json\",\n  \"keywords\": [\"Developer\", \"Courses\"],\n  \"limit\": 4\n}\n\n\nYou get this markup:\n\n<div class=\"featured-articles\">\n  <div>\n    <div>source</div>\n    <div><a href=\"/content/site/articles.json\">/content/site/articles.json</a></div>\n  </div>\n  <div>\n    <div>keywords</div>\n    <div>Developer,Courses</div>\n  </div>\n  <div>\n    <div>limit</div>\n    <div>4</div>\n  </div>\n</div>\n\n\nAnd it will be turned into this table representation:\n\n+-----------------------------------------------------------------------+\n| Featured Articles                                                     |\n+=======================================================================+\n| source   | [/content/site/articles.json](/content/site/articles.json) |\n+-----------------------------------------------------------------------+\n| keywords | Developer,Courses                                          |\n+-----------------------------------------------------------------------+\n| limit    | 4                                                          |\n+-----------------------------------------------------------------------+\n\nContainer blocks\n\nBoth of the previous structures have a single dimension: the list of properties. Container blocks allow adding children (usually of the same type or model) and hence are two-dimensional. These blocks still support their own properties rendered as rows with a single column first. But they also allow adding children, for which each item is rendered as a row and each property as a column within that row.\n\nIn the following example, a block accepts a list of linked icons as children, where each linked icon has an image and a link. 
Notice the filter ID set in the block’s data to reference the filter configuration.\n\nWith this data:\n\n{\n  \"name\": \"Our Partners\",\n  \"model\": \"text-only\",\n  \"filter\": \"our-partners\",\n  \"text\": \"<p>Our community of partners is ...</p>\",\n  \"item_0\": {\n    \"model\": \"linked-icon\",\n    \"image\": \"/content/dam/partners/foo.png\",\n    \"imageAlt\": \"Icon of Foo\",\n    \"link\": \"https://foo.com\"\n  },\n  \"item_1\": {\n    \"model\": \"linked-icon\",\n    \"image\": \"/content/dam/partners/bar.png\",\n    \"imageAlt\": \"Icon of Bar\",\n    \"link\": \"https://bar.com\"\n  }\n}\n\n\nYou get this markup:\n\n<div class=\"our-partners\">\n  <div>\n    <div>\n        Our community of partners is ...\n    </div>\n  </div>\n  <div>\n    <div>\n      <picture>\n         <img src=\"/content/dam/partners/foo.png\" alt=\"Icon of Foo\">\n      </picture>\n    </div>\n    <div>\n      <a href=\"https://foo.com\">https://foo.com</a>\n    </div>\n  </div>\n  <div>\n    <div>\n      <picture>\n         <img src=\"/content/dam/partners/bar.png\" alt=\"Icon of Bar\">\n      </picture>\n    </div>\n    <div>\n      <a href=\"https://bar.com\">https://bar.com</a>\n    </div>\n  </div>\n</div>\n\n\nAnd it will be turned into this table representation:\n\n+-------------------------------------------------------------+\n| Our Partners                                                |\n+=============================================================+\n| Our community of partners is ...                            |\n+-------------------------------------------------------------+\n| ![Icon of Foo][image0] | [https://foo.com](https://foo.com) |\n+-------------------------------------------------------------+\n| ![Icon of Bar][image1] | [https://bar.com](https://bar.com) |\n+-------------------------------------------------------------+\n\nColumns Block\n\nThe columns block is a bit different from the standard block types. 
Unlike those block types where content modeling is all about creating semantic HTML, the columns block is about defining a layout. If you set the number of rows and/or columns, it accordingly renders an appropriate table you can style with the block option classes.\n\nFor this reason, columns blocks have the following limitations.\n\nThey offer no content modeling.\nThey can only have rows, columns, and classes (or classes_*) properties.\nYou can only add default content (text, title, image, link/button) to the cells.\nCreating semantic content models for blocks\n\nWith the mechanics of block structure explained, it is possible to create a content model that maps content persisted in AEM one-to-one to the delivery tier.\n\nEarly in every project, a content model must be carefully considered for every block. It must be agnostic to the content source and authoring experience in order to allow authors to switch or combine them while reusing block implementations and styles. More details and general guidance can be found in David’s Model (take 2). More specifically, the block collection contains an extensive set of content models for specific use cases of common user interface patterns.\n\nFor AEM authoring as your content source, this raises the question of how to serve a compelling semantic content model when the information is authored with forms composed of multiple fields instead of editing semantic markup in-context like rich text.\n\nTo solve this problem, there are three methods that facilitate creating a compelling content model:\n\nType Inference\nField Collapse\nElement Grouping\n\nNOTE: Block implementations can deconstruct the content and replace the block with a client-side-rendered DOM. While this is possible and intuitive for a developer, it is not the best practice for Edge Delivery Services.\n\nType inference\n\nFor some values we can infer the semantic meaning from the values themselves. 
Such values include:\n\nImages - If a reference to a resource in AEM is an asset with a MIME type starting with image/, the reference is rendered as <picture><img src=\"${reference}\"></picture>.\nLinks - If a reference exists in AEM and is not an image, or if the value starts with https?:// or #, the reference is rendered as <a href=\"${reference}\">${reference}</a>.\nRich text - If a trimmed value starts with a paragraph-level element (p, ul, ol, h1-h6, etc.), the value is rendered as rich text.\nClass names - The classes property is treated as block options and rendered in the table header for simple blocks, or as a value list for items in a container block. It is useful if you want to style a block differently, but don’t need to create an entirely new block. It is possible to use multiple properties as block options using Element Grouping.\nValue lists - If a value is a multi-value property and the first value matches none of the previous types, all values are concatenated as a comma-separated list.\n\nEverything else will be rendered as plain text.\n\nField collapse\n\nField collapse is the mechanism to combine multiple field values into a single semantic element based on a naming convention using the suffixes Title, Type, MimeType, Alt, and Text (all case sensitive). 
Any property ending with any of those suffixes will not be considered a value, but rather an attribute of another property.\n\nImages\n\nWith this data:\n\n{\n  \"image\": \"/content/dam/red-car.png\",\n  \"imageAlt\": \"A red car on a road\"\n}\n\n\nYou get this markup:\n\n<picture>\n  <img src=\"/content/dam/red-car.png\" alt=\"A red car on a road\">\n</picture>\n\n\nAnd it will be turned into this table representation:\n\n![A red car on a road][image0]\n\nLinks and buttons\n\nWith this data:\n\n{\n  \"link\": \"https://www.adobe.com\",\n  \"linkTitle\": \"Navigate to adobe.com\",\n  \"linkText\": \"adobe.com\",\n  \"linkType\": \"primary\"\n}\n\n\nYou get this markup:\n\n<a href=\"https://www.adobe.com\" title=\"Navigate to adobe.com\">adobe.com</a>\n\nAnd it will be turned into this table representation:\n\n[adobe.com](https://www.adobe.com \"Navigate to adobe.com\")\n**[adobe.com](https://www.adobe.com \"Navigate to adobe.com\")**\n_[adobe.com](https://www.adobe.com \"Navigate to adobe.com\")_\n\nHeadings\n\nWith this data:\n\n{\n  \"heading\": \"Getting started\",\n  \"headingType\": \"h2\"\n}\n\n\nYou get this markup:\n\n<h2>Getting started</h2>\n\nAnd it will be turned into this table representation:\n\n## Getting started\n\nElement grouping\n\nWhile field collapse is about combining multiple properties into a single semantic element, element grouping is about concatenating multiple semantic elements into a single cell. This is particularly helpful for use cases where the author should be restricted in the type and number of elements that they can create.\n\nFor example, a teaser component may allow the author to only create a subtitle, title, and a single paragraph description combined with a maximum of two call-to-action buttons. 
Grouping these elements together yields semantic markup that can be styled without further action.\n\nElement grouping uses a naming convention, where the group name is separated from each property in the group by an underscore. Field collapse of the properties in a group works as previously described.\n\nIf you add a group and there is already a field that has the group name, the field will become part of the group as well, so that element grouping can be added to blocks that didn’t previously use it, without having to migrate content.\n\nWith this data:\n\n{\n  \"name\": \"teaser\",\n  \"model\": \"teaser\",\n  \"image\": \"/content/dam/teaser-background.png\",\n  \"imageAlt\": \"A group of people sitting on a stage\",\n  \"teaserText_subtitle\": \"Adobe Experience Cloud\",\n  \"teaserText_title\": \"Meet the Experts\",\n  \"teaserText_titleType\": \"h2\",\n  \"teaserText_description\": \"<p>Join us in this ask me everything session...</p>\",\n  \"teaserText_cta1\": \"https://link.to/more-details\",\n  \"teaserText_cta1Text\": \"More Details\",\n  \"teaserText_cta2\": \"https://link.to/sign-up\",\n  \"teaserText_cta2Text\": \"RSVP\",\n  \"teaserText_cta2Type\": \"primary\"\n}\n\n\nYou get this markup:\n\n<div class=\"teaser\">\n  <div>\n    <div>\n      <picture>\n        <img src=\"/content/dam/teaser-background.png\" alt=\"A group of people sitting on a stage\">\n      </picture>\n    </div>\n  </div>\n  <div>\n    <div>\n      <p>Adobe Experience Cloud</p>\n      <h2>Meet the Experts</h2>\n      <p>Join us in this ask me everything session ...</p>\n      <p><a href=\"https://link.to/more-details\">More Details</a></p>\n      <p><strong><a href=\"https://link.to/sign-up\">RSVP</a></strong></p>\n    </div>\n  </div>\n</div>\n\n\nAnd it will be turned into this table representation:\n\n+-------------------------------------------------+\n| Teaser                                          |\n+=================================================+\n| ![A group of people 
sitting on a stage][image0] |\n+-------------------------------------------------+\n| Adobe Experience Cloud                          |\n| ## Meet the Experts                             |\n| Join us in this ask me everything session ...   |\n| [More Details](https://link.to/more-details)    |\n| [RSVP](https://link.to/sign-up)                 |\n+-------------------------------------------------+\n\nElement Grouping for Block Options\n\nAs previously described, the classes property can be used to specify the block options of a block and children in a container block. Usually a select or multiselect field is used to author the block options as a single field. However, if there are many options, or if some of them are mutually exclusive, it is easier for authors to select each block option with a separate field. This is possible using the element grouping naming convention as described previously.\n\nWith this data:\n\n{\n  \"name\": \"teaser\",\n  \"model\": \"teaser\",\n  \"classes\": \"variant-a\",\n  \"classes_background\": \"light\",\n  \"classes_fullwidth\": true\n}\n\n\nYou get this markup:\n\n<div class=\"teaser variant-a light fullwidth\">\n  ...\n</div>\n\n\nEach field in the classes group can be text, an array of texts, or a boolean. As the example above shows, fields with a boolean value are handled slightly differently than fields with text values. For a boolean field, the property name excluding the classes group name will be added as a block option. This is particularly helpful when a toggle is used to turn a block option on or off.\n\nMulti-Fields and Composite Multi-Fields\nEarly-access technology\n\nThis feature is currently available as an early-access technology and includes breaking changes. It can be enabled upon request through Adobe. Ask us about this feature on your Teams or Slack channel!\n\nThe Universal Editor supports advanced modeling scenarios using multi fields and containers. 
These features allow authors to manage lists of structured or unstructured content elements with flexible rendering.\n\nUse Cases\n\nMulti-fields and composite multi-fields are ideal for handling structured content in use cases like:\n\nSelecting multiple content fragments in a Tabs block (reference field with multi=true)\nCreating keyword lists (text field with multi=true)\nDefining multiple CTAs (container field with multi=true)\nBuilding image carousels (either single reference field with multi=true or container field with multi=true including a reference and a text field for the alt text)\nRendering Behavior\nWhen the contained items are single semantic elements (e.g., plain text, links, images), they are rendered as a structured list using <ul> and <li> tags. For example:\n<ul>\n  <li>dog</li>\n  <li>cat</li>\n</ul>\n\nWhen the contained items include multiple semantic elements (e.g., combinations of text, richtext, links), they are rendered as a flat list, with each item separated by an <hr> tag. For example:\n<hr>\n<p>First paragraph</p>\n<p>Second paragraph</p>\n<hr>\n<pre>Third paragraph</pre>\n<p>Fourth paragraph</p>\n<hr>\n\n\nSpecial Cases\n\nWhen a multi-field contains only one item, it is rendered as a paragraph rather than a list.\nFor richtext multi-fields inside a composite multi-field, only the first richtext value is rendered. 
Additional entries are ignored.\nExamples for Multi-Fields\nMulti-Field with Links\n\nWith this modelling:\n\n{\n    \"component\": \"aem-content\",\n    \"name\": \"links\",\n    \"multi\": true\n}\n\n\nAnd this data:\n\nlinks = [ \n    \"https://www.google.com\", \n    \"https://www.facebook.com\"\n];\n\n\nYou will get this markup:\n\n<ul>\n  <li><a href=\"https://www.google.com\">https://www.google.com</a></li>\n  <li><a href=\"https://www.facebook.com\">https://www.facebook.com</a></li>\n</ul>\n\nMulti-Field with Images\n\nWith this modelling:\n\n{\n    \"component\": \"reference\",\n    \"name\": \"images\",\n    \"multi\": true\n}\n\n\nAnd this data:\n\nimages = [ \n    \"/content/dam/images/dog.jpg\", \n    \"/content/dam/images/cat.jpg\"\n];\n\n\nYou will get this markup:\n\n<ul>\n  <li><picture><img src=\"/content/dam/images/dog.jpg\" /></picture></li>\n  <li><picture><img src=\"/content/dam/images/cat.jpg\" /></picture></li>  \n</ul>\n\nMulti-Field with Rich Text\n\nWith this modelling:\n\n{\n    \"component\": \"richtext\",\n    \"name\": \"richtexts\",\n    \"multi\": true\n}\n\n\nAnd this data:\n\nrichtexts = [ \n    \"<p>First paragraph</p><p>Second paragraph</p>\", \n    \"<pre>Third paragraph</pre><p>Fourth paragraph</p>\", \n];\n\n\nYou will get this markup:\n\n<hr>\n<p>First paragraph</p>\n<p>Second paragraph</p>\n<hr>\n<pre>Third paragraph</pre>\n<p>Fourth paragraph</p>\n<hr>\n\nExamples for Composite Multi-Fields\nComposite Multi-Field with an Image and an alt Text\n\nWith this modelling:\n\n{\n \"component\": \"container\",\n \"name\": \"images\",\n \"multi\": true,\n \"fields\" : [\n   {\n     \"component\": \"reference\",\n     \"name\": \"image\"\n   },\n   {\n     \"component\": \"text\",\n     \"name\": \"imageAlt\"\n   }\n ]\n}\n\n\nAnd this data:\n\nimages = [\n {\n   \"image\": \"/content/dam/images/dog.jpg\",\n   \"imageAlt\": \"dog\"\n },\n {\n   \"image\": \"/content/dam/images/cat.jpg\",\n   \"imageAlt\": \"cat\"\n }\n];\n\n\nYou 
will get this markup:\n\n<ul>\n  <li><picture><img src=\"/content/dam/images/dog.jpg\" alt=\"dog\" /></picture></li>\n  <li><picture><img src=\"/content/dam/images/cat.jpg\" alt=\"cat\" /></picture></li>  \n</ul>\n\nComposite Multi-Field with a Text and a Link\n\nWith this modelling:\n\n{\n \"component\": \"container\",\n \"name\": \"ctas\",\n \"multi\": true,\n \"fields\" : [\n   {\n     \"component\": \"richtext\",\n     \"name\": \"text\"\n   },\n   {\n     \"component\": \"aem-content\",\n     \"name\": \"link\"\n   },\n   {\n     \"component\": \"text\",\n     \"name\": \"linkText\"\n   }\n ]\n}\n\n\nAnd this data:\n\nctas = [\n {\n   \"text\": \"<p>Find one of our stores near you</p>\",\n   \"link\": \"https://www.google.com/maps/search/company\",\n   \"linkText\": \"Google Maps\"\n },\n {\n   \"text\": \"<p>Follow us on facebook.com</p>\",\n   \"link\": \"http://www.facebook.com/company\",\n   \"linkText\": \"Facebook\"\n }\n];\n\n\nYou will get this markup:\n\n<hr>\n<p>Find one of our stores near you</p>\n<p><a href=\"https://www.google.com/maps/search/company\">Google Maps</a></p>\n<hr>\n<p>Follow us on facebook.com</p>\n<p><a href=\"http://www.facebook.com/company\">Facebook</a></p>\n<hr>\n\nSections and section metadata\n\nJust as a developer can define and model multiple blocks, they can define different sections.\n\nThe content model of Edge Delivery Services deliberately allows only a single level of nesting, which is any default content or block contained by a section. This means that more complex visual components that can contain other components have to be modelled as sections and combined using client-side auto-blocking. Typical examples of this are tabs and collapsible sections like accordions.\n\nA section can be defined in the same way as a block, but with the resource type of core/franklin/components/section/v1/section. 
Sections can have a name and a filter ID, which are used by the Universal Editor only, as well as a model ID, which is used to render the section metadata. The model is thus the model of the section metadata block, which is automatically appended to the section as a key-value block if it is not empty.\n\nThe model ID and filter ID of the default section are both section. They can be used to alter the behavior of the default section. The following example adds some styles and a background image to the section metadata model.\n\n{\n  \"id\": \"section\",\n  \"fields\": [\n    {\n      \"component\": \"multiselect\",\n      \"name\": \"style\",\n      \"value\": \"\",\n      \"label\": \"Style\",\n      \"valueType\": \"string\",\n      \"options\": [\n        {\n          \"name\": \"Fade in Background\",\n          \"value\": \"fade-in\"\n        },\n        {\n          \"name\": \"Highlight\",\n          \"value\": \"highlight\"\n        }\n      ]\n    },\n    {\n      \"component\": \"reference\",\n      \"valueType\": \"string\",\n      \"name\": \"background\",\n      \"label\": \"Image\",\n      \"multi\": false\n    }\n  ]\n}\n\n\nThe following example defines a tab section. During auto-blocking, consecutive sections with a tab title data attribute can be combined into a tabs block.\n\n{\n  \"title\": \"Tab\",\n  \"id\": \"tab\",\n  \"plugins\": {\n    \"xwalk\": {\n      \"page\": {\n        \"resourceType\": \"core/franklin/components/section/v1/section\",\n        \"template\": {\n          \"name\": \"Tab\",\n          \"model\": \"tab\",\n          \"filter\": \"section\"\n        }\n      }\n    }\n  }\n}\n\nPage metadata\n\nDocuments can have a page metadata block, which is used to define which <meta> elements are rendered in the <head> of a page. 
The page properties of pages in AEM as a Cloud Service map to those that are available out-of-the-box for Edge Delivery Services, like title, description, keywords, etc.\n\nBefore exploring how to define your own metadata, please review the following documents to understand the concept of page metadata.\n\nMetadata\nBulk metadata\n\nAdditional page metadata can be defined in two ways.\n\nMetadata spreadsheets\n\nMetadata can be defined on a per-path or per-path-pattern basis in a table-like way in AEM as a Cloud Service. There is an authoring UI available for table-like data that is similar to Excel or Google Sheets.\n\nPlease see the document Using Spreadsheets to Manage Tabular Data for more information.\n\nPage properties\n\nMany of the default page properties available in AEM are mapped to the respective page metadata in a document. That includes, for example, title, description, robots, canonical URL, or keywords. Some AEM-specific properties are available as well:\n\ncq:lastModified as modified-time in ISO8601 format\nThe time the document was last published as published-time in ISO8601 format\ncq:tags as cq-tags as a comma-separated list of the tag IDs.\n\nIt is also possible to define a component model for custom page metadata, which will be made available to the author in the Universal Editor.\n\nTo do so, create a component model with the ID page-metadata.\n\n{\n  \"id\": \"page-metadata\",\n  \"fields\": [\n    {\n      \"component\": \"text\",\n      \"name\": \"theme\",\n      \"label\": \"Theme\"\n    }\n  ]\n}\n\n\nYou can also create component models per template. To do so, create entries named <template>-metadata, with template being the value stored in the template metadata property. 
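As a sketch, a hypothetical article template could get its own metadata model named article-metadata (the template name and the author field are illustrative, not part of the product defaults):

```json
{
  "id": "article-metadata",
  "fields": [
    {
      "component": "text",
      "name": "author",
      "label": "Author"
    }
  ]
}
```

Such a model would then apply to pages whose template metadata property is set to article.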
For details on metadata, please see the document Metadata Block.\n\nPrevious\n\nCreating Blocks Instrumented for use with the Universal Editor\n\nUp Next\n\nPath Mapping","lastModified":"1762125413","labs":""},{"path":"/developer/authoring-path-mapping","title":"Path mapping for AEM authoring as your content source","image":"/developer/media_152a6907988019c7269a93117139036b43f773054.png?width=1200&format=pjpg&optimize=medium","description":"To be able to use AEM authoring as your content source and publish your content to Edge Delivery Services, you must set up your project’s ...","content":"style\ncontent\n\nPath mapping for AEM authoring as your content source\n\nTo be able to use AEM authoring as your content source and publish your content to Edge Delivery Services, you must set up your project’s path mapping. This mapping has two purposes.\n\nIt maps and creates a relationship between page paths used on your AEM authoring instance and the public page paths used on your website.\nIt controls which content (pages, sheets, assets, etc.) are published to Edge Delivery Services.\n\nThe path mapping must be configured for each project individually and according to the project’s content and URL structure. It is used by AEM during content publishing and while editing content in the Universal Editor.\n\nConfiguration format\n\nThe format of the path mapping configuration contains two sections (mappings and includes) similar to the following example.\n\n{\n  \"mappings\": [\n    \"/content/aem-boilerplate/:/\",\n    \"/content/aem-boilerplate/configuration:/.helix/config.json\"\n  ],\n  \"includes\": [\n    \"/content/aem-boilerplate/\"\n  ]\n}\n\nmappings\n\nThe mappings configuration holds an array of internal paths (on the AEM authoring instance) and external URL paths (on the public website).\n\nThe format is <internal paths>:<external path>. 
It typically consists of a minimum of two entries.\n\nThe first entry from the example is the path mapping of the website pages.\nThe second entry controls the mapping of the .helix/config.json to the corresponding spreadsheet page in the AEM authoring repository.\n\nIn this example, all pages stored under /content/aem-boilerplate/... will be publicly accessible on the Edge Delivery Services site directly under https://main--my-site--org.aem.live/.....\n\nTIP: All tabular data managed as spreadsheets (e.g. metadata, redirects, and taxonomy) are typically published as .json API URLs on Edge Delivery Services. To do so, they must be individually listed in the mapping configuration. Please see the document Using Spreadsheets to Manage Tabular Data for more information.\n\nExamples\n../path/:/ maps a folder (both ending in /).\n../path:/anotherpath maps a document to a different path (previously known as a vanity URL).\n../path/en:/folder/ is a special case and maps the document to /folder/index.\n../path/:/en maps a folder to a document and not a typical use case.\n\nAll rules are applied and the last that matches is used. I.e. the order of the rules is from least to most significant.\n\nincludes\n\nThe includes configuration controls which content paths are actually replicated to Edge Delivery Services. It can hold any array of paths as well and typically contains the site’s top-level root page.\n\nAssets used on Edge Delivery Services pages are typically published alongside the webpage. They will be exported from the AEM authoring instance to Edge Delivery Services automatically.\n\nTIP: If you have a use case where you want assets published directly to Edge Delivery Services (for example you would like images or PDFs to be directly accessible by their URLs outside of a page context), you must add the DAM paths to the includes section of the configuration as well. 
For example, if an asset root folder such as /content/dam/my-site/documents containing a set of PDFs should be publicly accessible via /assets/..., an entry must be added to the includes section of the configuration.\n\nHow to configure\n\nYour path mappings can be configured in one of two ways depending on the setup of your project.\n\nIf the project is configured for aem.live and uses the configuration service for centralized configurations, the path mapping for each site is configured via this configuration service.\nHere is an example cURL request to configure path mappings.\ncurl --request POST \\\n  --url https://admin.aem.page/config/{org}/sites/{site}/public.json \\\n  --header 'Content-Type: application/json' \\\n  --header 'x-auth-token: ......' \\\n  --data '{\n  \"paths\": {\n    \"mappings\": [\n      \"/content/aem-boilerplate/:/\",\n      \"/content/aem-boilerplate/configuration:/.helix/config.json\"\n    ],\n    \"includes\": [\n      \"/content/aem-boilerplate/\"\n    ]\n  }\n}'\n\nIf the project does not use the configuration service, the path mapping is configured via a paths.json file in your project’s GitHub repository.\nSee https://github.com/adobe-rnd/aem-boilerplate-xwalk/blob/main/paths.json for an example.\n\nIn both cases, once you configure your path mappings, you can check the configuration via the publicly-accessible configuration URL https://<branch>--<site>--<org>.aem.page/config.json.\n\nPrevious\n\nPath Mapping\n\nUp Next\n\nAnatomy of an AEM Project","lastModified":"1768390663","labs":""},{"path":"/docs/authoring-tabular-data","title":"Managing tabular data with AEM authoring as your content source","image":"/docs/media_11b9214509f21647767e9965fe77c43e7e9a3478d.png?width=1200&format=pjpg&optimize=medium","description":"For any AEM with Edge Delivery Services site, there is a need to maintain lists of tabular data such as for key-value mappings. 
These can ...","content":"style\ncontent\n\nManaging tabular data with AEM authoring as your content source\n\nFor any AEM with Edge Delivery Services site, there is a need to maintain lists of tabular data such as for key-value mappings. These can be lists of many different values such as metadata and redirects. Edge Delivery Services allows you to maintain such tabular lists using an intuitive tool: the spreadsheet. AEM translates these spreadsheets into JSON files that can easily be consumed by your website or web application.\n\nCommon use cases include:\n\nPlaceholders\nMetadata\nHeaders\nRedirects\nConfigurations such as for CDN setups\n\nIn addition, you can create spreadsheets of any structure to store mappings for your own purposes.\n\nThis document uses the example of redirects to illustrate how to create such spreadsheets. See the previously-linked topics in the Edge Delivery Services documentation for details of each use case.\n\nTIP: For more information on how spreadsheets in general work with Edge Delivery Services, please see the document Spreadsheets and JSON.\n\nTIP: Spreadsheets should only be used to maintain tabular data. For storing structured data, check out AEM’s headless features.\n\nCreating a spreadsheet\n\nIn this example, you will create a spreadsheet to manage redirects for your project with AEM authoring as your content source. The same steps apply to other spreadsheet types that you wish to create.\n\nSign in to your AEM as a Cloud Service authoring instance, go to the Sites console, and navigate to the root of the site which requires a spreadsheet. Tap or click Create → Page.\n\nOn the Template tab of the create page wizard, tap or click the Redirects template to select it and then tap or click Next.\n\nThe Properties tab of the wizard presents the default values for the redirects spreadsheet. 
Tap or click Create.\nTitle - Leave this value as-is.\nColumns - The minimum columns needed for redirects are prepopulated.\nsource - The page to be redirected\ndestination - The page to redirect to\n\nIn the Success dialog, tap or click Open.\nA new tab opens with the spreadsheet loaded into an editor with the predefined source and destination columns. To define your redirects, tap or click the empty row of the source column. Changes are saved automatically as you edit the spreadsheet.\n\nThe source is relative to the domain of your website, so it only contains the relative path.\nThe destination can be either a fully qualified URL if you are redirecting to a different website, or it can be a relative path if you are redirecting within your own website.\nUse the tab-key to move focus to the next cell.\nThe editor adds new rows to the spreadsheet as necessary.\nTo delete or move a row, use the Delete icon at the end of each row and the drag handles at the beginning of each row, respectively.\nImporting spreadsheet data\n\nIn addition to editing spreadsheets in the AEM Page Editor, you can also import data from a CSV file.\n\nWhen editing your spreadsheet in AEM, tap or click the Upload button at the top-left of the screen.\nIn the drop-down, select how you would like to import your data.\nReplace Doc to replace the content of the entire spreadsheet with the content of the CSV file you will upload.\nAppend To Doc to append the data of the CSV file you will upload to the existing spreadsheet contents.\nIn the dialog that opens, select your CSV file and then tap or click Open.\n\nA dialog opens as the import is processed. Once complete, the data in the CSV file is added to or replaces the content of the spreadsheet. 
If any errors are encountered such as a mismatch of columns, they are reported so you can correct your CSV file.\n\nNOTE: Keep the following in mind when importing data.\n\nThe headings in the CSV file must match the columns in the spreadsheet exactly.\nImporting the entire CSV does not modify the column headings, only the content rows.\nIf you need to update the columns, you must do that in the AEM Page Editor before performing the import of the CSV.\nA CSV file can not be larger than 10 MB for import.\n\nDepending on your selection of mode, you can also create, replace, or append to spreadsheets using a CSV and a cURL command similar to the following.\n\ncurl --request POST \\\n  --url http://<aem-instance>/bin/asynccommand \\\n  --header 'content-type: multipart/form-data' \\\n  --form file=@/path/to/your.csv \\\n  --form spreadsheetPath=/content/<your-site>/<your-spreadsheet> \\\n  --form 'spreadsheetTitle=Your Spreadsheet' \\\n  --form cmd=spreadsheetImport \\\n  --form operation=asyncSpreadsheetImport \\\n  --form _charset_=utf-8 \\\n  --form mode=append\n\n\nThe call returns an HTML page with information about the job ID.\n\nMessage | Job(Id:2024/9/18/15/27/5cb0cacc-585d-4176-b018-b684ad2dfd02_90) created successfully. 
Please check status at Async Job Status Navigation.\n\nYou can use the Jobs console to view the status of the job or use the ID returned to query it.\n\nhttps://<aem-instance>/bin/asynccommand?optype=JOBINF&jobid=2024/10/24/14/1/8da63f9e-066b-4134-95c9-21a9c57836a5_1\n\nOther spreadsheet types\n\nNow that you know how to create a redirects spreadsheet, you can create any other standard spreadsheet type:\n\nPlaceholders\nMetadata\nHeaders\nConfiguration - Such as for cache invalidation\nTaxonomy\n\nSimply follow the same steps in the sections Create Spreadsheet and Publish paths.json, choosing the appropriate template and updating the paths.json file accordingly.\n\nFor Configuration, Headers, and Metadata, make sure to add a mapping to publish them to their default locations:\n\nConfiguration: /.helix/config.json\nHeaders: /.helix/headers.json\nMetadata: /metadata.json\nTaxonomy: Please see the document Managing Taxonomy Data for more information.\n\nAdditionally, you can create your own spreadsheet with arbitrary columns for your own use.\n\nNOTE: You do not need to create a spreadsheet to manage indexing for AEM as a Cloud Service with Edge Delivery Services projects. 
If you wish to create your own indices, please follow this documentation to create your own helix-query.yaml file.\n\nCreating your own spreadsheet\nFollow the same steps in the section Create Spreadsheet.\nWhen selecting the template, choose Spreadsheet.\nIn the Properties tab of the wizard, you can add your own columns.\n\nIn the Columns section, tap or click Add to add a new column.\nProvide a name for the column.\nRemove or reorganize the columns using the Delete and drag handle icons, respectively.\nCreate the spreadsheet and publish as per the instructions for the redirects spreadsheet.\nAdd a mapping to the paths.json file as per the instructions for the redirects spreadsheet.\nPublishing a spreadsheet paths.json (optional)\n\nIn order for AEM to be able to publish the data in your spreadsheet, you may need to update the paths.json file of your project in certain situations.\n\nOpen the root of your project in GitHub.\nTap or click the paths.json file to open its details and then the Edit icon.\n\nAdd a line to map your new spreadsheet to a redirects.json resource.\n{\n  \"mappings\": [\n   \"/content/<site-name>/:/\",\n   \"/content/<site-name>/redirects:/redirects.json\"\n  ]\n}\n\nNOTE This paths.json entry is based on the example of creating redirects using tabular data. 
Make sure to update the path appropriate to the type of spreadsheet you are creating.\nClick Commit changes… to save the changes to main.\nEither commit to main or create a pull request as per your process.\nWhen you are finished defining your redirects and you updated the path mapping, return to the Sites console.\nTap or click to select the redirects spreadsheet that you created in the console and then tap or click Quick Publish in the actions bar to publish the spreadsheet.\n\nIn the Quick Publish dialog, tap or click Publish.\nA banner confirms the publication.\n\n\nThe redirects spreadsheet is now published and publicly-accessible.\n\nTIP: For more information about path mappings, please see the document Path Mapping for Edge Delivery Services.","lastModified":"1763128514","labs":""},{"path":"/docs/authoring-taxonomy","title":"Managing taxonomy data with AEM authoring as your content source","image":"/docs/media_129b5b9425c45217950e8b0caa67bf1cb1b2d241f.png?width=1200&format=pjpg&optimize=medium","description":"Tagging is an important feature that helps you organize and manage your pages. The Tagging Console in AEM allows you to create a rich taxonomy ...","content":"style\ncontent\n\nManaging taxonomy data with AEM authoring as your content source\n\nTagging is an important feature that helps you organize and manage your pages. The Tagging Console in AEM allows you to create a rich taxonomy of tags to organize your pages.\n\nThese tags are useful not only for you and your authors in organizing your content, but can also be for your readers as well. Tags and their taxonomy can be used in components on the page to help your readers navigate your content.\n\nThe Universal Editor works only with the IDs of your tags. 
By creating a taxonomy page for your content, you expose the descriptions of these tags in all languages to the Universal Editor so it can use that information when rendering content.\n\nTIP: Please see the document Model Definitions, Fields, and Component Types for more information about the AEM Tag field available to the Universal Editor, which can work with your taxonomy.\n\nCreating a taxonomy page\n\nA taxonomy is created like any other page in AEM.\n\nNavigate to the Sites console.\nSelect the location where you wish to create your taxonomy.\nTap or click Create → Page.\n\nOn the Template tab of the Create Page wizard, select the Taxonomy template and tap or click Next.\n\nOn the Properties tab of the Create Page wizard, provide a meaningful Title for the page and in the Tags field, use the tag picker to select the tag(s) or namespace(s) you wish to include in your taxonomy.\n\nTap or click Create.\n\nThe taxonomy page is created. In the Success dialog, you can tap or click Done to dismiss the message or Open to edit the page in the Page Editor.\n\nTake note of the resulting page name of the taxonomy page for use in the following steps.\n\nEditing a taxonomy page\n\nYou start editing a taxonomy page like any other page in AEM.\n\nNavigate to the Sites console.\nSelect the taxonomy you wish to edit.\nTap or click Edit in the action bar.\nThe Page Editor opens, showing the taxonomy.\nThe taxonomy page is read-only in the Page Editor.\n\nTap or click the Page Information icon in the toolbar and select Open Properties.\n\nIn the Page Properties window, you can update the name of the page and use the tag selector to update the tag(s) and namespace(s) included in your taxonomy.\n\nTap or click Save & Close.\n\nThe page displayed in the Page Editor is read-only because the content of the taxonomy is generated automatically from the selected tag(s) and namespace(s). They act as a kind of filter for automatically generating the content of the taxonomy. 
Therefore there is no reason to directly edit the page in the editor.\n\nAEM automatically updates the content of the taxonomy page when you update the underlying tag(s) and namespace(s). However you must republish the taxonomy after any change in order to make those changes available to your users.\n\nUpdate paths.json for taxonomy publication\n\nLike when managing and publishing tabular data for your Edge Delivery Services site, you need to update your paths.json file of your project to allow publication of your taxonomy data.\n\nOpen the root of your project in GitHub.\nTap or click the paths.json file to open its details and then the Edit icon.\n\nAdd a line to map your new taxonomy page to a .json resource.\n<taxonomy-page-name> must match the name of the taxonomy page you created.\n<taxonomy-json-name> can be any valid name you choose.\n{\n  \"mappings\": [\n   \"/content/<site-name>/:/\",\n   \"/content/<site-name>/<taxonomy-page-name>:/<taxonomy-json-name>.json\"\n  ]\n}\n\nClick Commit changes… to save the changes to main.\nEither commit to main or create a pull request as per your process.\n\nThis process only needs to be done once per taxonomy page. 
Once done, you can publish your taxonomy.\n\nTIP: For more information about path mappings, please see the document Path Mapping for Edge Delivery Services.\n\nPublishing a taxonomy\n\nA taxonomy is not available to the Universal Editor or your users until it is published.\n\nTaxonomy pages are published like any other page by using the Quick Publish or Manage Publication icons in the toolbar.\n\nYou must republish your taxonomy page every time you:\n\nEdit the taxonomy page.\nEdit or add to the tag(s) and namespace(s) included in your taxonomy page.\n\nIf you create a new taxonomy page you must first add a mapping to it to the paths.json file in your project.\n\nAccessing taxonomy information\n\nOnce your taxonomy is published, its information can be leveraged by the Universal Editor and made visible to your users. You can access the taxonomy as JSON data at the following address.\n\nhttps://<branch>--<repository>--<owner>.aem.page/<taxonomy-json-name>.json\n\nUse the <taxonomy-json-name> that you defined when mapping your taxonomy to the paths.json file in your project. The taxonomy data is returned as JSON data like in the following example.\n\n{\n  \"total\": 3,\n  \"offset\": 0,\n  \"limit\": 3,\n  \"data\": [\n    {\n      \"tag\": \"default:\",\n      \"title\": \"Standard Tags\"\n    },\n    {\n      \"tag\": \"do-not-translate\",\n      \"title\": \"Do Not Translate\"\n    },\n    {\n      \"tag\": \"translate\",\n      \"title\": \"Translate\"\n    }\n  ],\n  \"columns\": [\n    \"tag\",\n    \"title\"\n  ],\n  \":type\": \"sheet\"\n}\n\n\nThis JSON data will automatically update as you update the taxonomy and republish it. 
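The sheet JSON shown above can be consumed with a few lines of client-side code. The following is a minimal sketch (the function names and the URL handling are illustrative, not part of the product API); it builds a lookup from tag IDs to their titles:

```javascript
// Build a Map from tag ID to title out of a published taxonomy sheet.
// `sheet` is the parsed JSON as returned by the .json endpoint above.
function taxonomyLookup(sheet) {
  return new Map(sheet.data.map((row) => [row.tag, row.title]));
}

// Usage sketch: fetch the published taxonomy and build the lookup.
// The URL follows the pattern described above for published taxonomies.
async function loadTaxonomy(url) {
  const res = await fetch(url);
  return taxonomyLookup(await res.json());
}
```

A page component could then call loadTaxonomy once (for example with a hypothetical /taxonomy.json mapping) and translate the tag IDs stored in page metadata into display titles.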
Your app can programmatically access this information for your users.\n\nIf you maintain tags in multiple languages, you can access those languages by passing in the two-letter ISO language code as the value of a sheet= parameter.\n\nExposing additional tag properties\n\nBy default, your taxonomy will contain tag and title values as seen in the previous example. You can configure your taxonomy to expose additional tag properties. In this example, we will expose the tag description.\n\nUse the Sites console to select the taxonomy you created.\nTap or click the Properties icon in the toolbar.\nIn the Additional Properties section, tap or click Add to add a field.\nIn the new field, enter the JCR property name to expose. In this case, enter jcr:description for the tag description.\nTap or click Save & Close.\nWith the taxonomy still selected, tap or click Quick Publish in the toolbar.\n\nNow when you access your taxonomy, the tag description (or whatever property you chose to expose) is included in the JSON.\n\n{\n  \"total\": 3,\n  \"offset\": 0,\n  \"limit\": 3,\n  \"data\": [\n    {\n      \"tag\": \"default:\",\n      \"title\": \"Standard Tags\",\n      \"jcr:description\": \"These are the standard tags\"\n    },\n    {\n      \"tag\": \"do-not-translate\",\n      \"title\": \"Do Not Translate\",\n      \"jcr:description\": \"Tag to mark pages that should not be translated\"\n    },\n    {\n      \"tag\": \"translate\",\n      \"title\": \"Translate\",\n      \"jcr:description\": \"Tag to mark pages that should be translated\"\n    }\n  ],\n  \"columns\": [\n    \"tag\",\n    \"title\",\n    \"jcr:description\"\n  ],\n  \":type\": \"sheet\"\n}","lastModified":"1744374153","labs":""},{"path":"/docs/universal-editor-assets","title":"Publishing pages with AEM Assets","image":"/docs/media_19e56fa359f4dd1c50c997299519eaeb754fe3645.png?width=1200&format=pjpg&optimize=medium","description":"When editing content for the Universal Editor, you of course can select assets from AEM 
Assets. When you publish your content to Edge Delivery Services, ...","content":"style\ncontent\n\nPublishing pages with AEM Assets\n\nWhen editing content for the Universal Editor, you of course can select assets from AEM Assets. When you publish your content to Edge Delivery Services, the related AEM Assets content is published as well.\n\nTo ensure this seamless behavior, AEM and Edge Delivery Services must have proper access to AEM Assets in order to publish, and the assets, especially images and videos, must adhere to the limits of Edge Delivery. This includes:\n\nEnsuring that assets folders are accessible.\nAssigning a proper configuration to an asset folder (as required).\nResizing assets to be within the supported limits (as required).\n\nThis document describes how to configure these points.\n\nEnsuring assets folders are accessible\n\nWhen publishing pages from AEM to Edge Delivery Services, a technical account is used. This account, with a name in the format <hash>@techacct.adobe.com, is created automatically as a user in AEM by Cloud Manager whenever you first publish a page created with the Universal Editor.\n\nWhen you upload assets to a folder in AEM Assets, these will be accessible to this technical account by default, as long as you don’t make the folder private. If you use private asset folders, make sure you grant access to the folder to the technical account.\n\nAssigning a Proper Configuration to an Asset Folder\n\nGenerally, ensuring that your technical account has read access to your assets in AEM Assets is sufficient for publishing your assets along with your pages to Edge Delivery Services.\n\nAny asset is automatically published at its individual path to the Edge Delivery Services sites it is used on, as long as the path to the asset folder is included in the site’s path mapping.
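The include rule can be sketched as a simple path-prefix check. This is purely illustrative: `isAssetIncluded` and the data shape are assumptions for explanation, not an Edge Delivery Services API.

```javascript
// Illustrative sketch: an asset is published to a site when its path falls
// under one of the folders listed in that site's path-mapping "includes".
// Site names and the helper are assumptions, not an actual EDS API.
const siteIncludes = {
  'site.co.uk': ['/content/dam/site/en/'],
  'site.eu': ['/content/dam/site/en/'],
};

function isAssetIncluded(assetPath, includes) {
  return includes.some((prefix) => assetPath.startsWith(prefix));
}

const pdf = '/content/dam/site/en/terms-of-use.pdf';
console.log(isAssetIncluded(pdf, siteIncludes['site.co.uk'])); // true
```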
For example, a PDF /content/dam/site/en/terms-of-use.pdf used in a multi-regional site will automatically be published to site.co.uk and site.eu if they both include the path /content/dam/site/en/ in their respective path mapping, and reference the asset on any page.\n\nAdditional configuration is needed, however, when you want to publish assets to a site individually when they are not referenced by any page of that site.\n\nTo support this use case, a configuration must be assigned to the AEM Assets folder.\n\nSign into your AEM authoring environment.\nUnder Sites, select the site where you are publishing your assets or the site with which the assets will be associated.\nTap or click Properties in the toolbar.\nOn the Advanced tab in the properties window, take note of the configuration in the field Cloud Configuration.\nThis is created automatically when you create your site in the format /conf/<site-name>.\nTap or click Cancel in the properties window and navigate to Assets → Files and select your AEM Assets folder.\nTap or click Properties in the toolbar.\nOn the Cloud Services tab of the properties window, in the Cloud Configuration field, select the same configuration as you noted previously.\nTap or click Save & Close.\nResizing Assets to be within Supported Limits of Edge Delivery Services\n\nAEM Assets supports assets in various formats and sizes, which can be too big to be reasonably served as web-optimized renditions.
Edge Delivery Services, however, enforces limits on which formats and which image and video sizes are supported.\n\nTo guarantee a seamless integration of AEM Assets with Edge Delivery Services, the assets can either be downscaled by content creators locally and uploaded to AEM Assets in the supported size, or a processing profile can be used to prepare the asset to be used for Edge Delivery.\n\nIn the latter case, AEM Assets will create static renditions that will automatically be used instead of the original asset, when the original asset exceeds the supported limits.\n\nCreating a Processing Profile\nSign in to your AEM authoring environment\nUnder Tools navigate to Assets and select Processing Profiles\nTap or click Create to create a new processing profile or select an existing one and tap or click on Edit\nAdd two image renditions to the processing profile: edge-delivery-services-jpeg with the extension jpeg and edge-delivery-services-png with the extension png.\nConfigure a maximum width and height of 2000x2000 pixels for both, and a quality of 100 for the jpeg rendition.\nConfigure the included mime type to image/jpeg for the jpeg rendition, and to image/png for the png rendition respectively.\nTap or click Save\n\nYou can follow the same steps to add a video rendition to downscale and downsample videos as well.\n\nSelect the previously created processing profile in the overview, and tap or click Edit\nIn the wizard, navigate to the Video tab\nAdd a rendition called edge-delivery-services-mp4 and select the mp4 extension\nConfigure the Bitrate to be 300 and (optionally) set a reasonable width for your intended use case\nTap or click Save\n\nAssigning the Processing Profile to an Asset Folder\n\nTo use the previously created processing profile, it has to be assigned to an asset folder and the assets within that folder have to be reprocessed.\n\nUnder Assets tap or click on Files\nNavigate to the folder that contains your assets and select it\nTap or click
Properties\nNavigate to the Asset Processing tab\nIn the Processing Profile field, select the previously created processing profile\nTap or click Save & Close\n\nEvery new asset uploaded to this folder or any subfolder will be processed using the assigned processing profile, and the Edge Delivery Services specific asset renditions will be generated. Existing assets, however, have to be reprocessed.\n\nTroubleshooting\nI am getting an error message that some images exceed the allowed limit of 10 MB\n\nEdge Delivery Services enforces limits on published content. Publishing content that exceeds these limits fails. Follow the steps above to resize your images.\n\nI am getting an error message that an asset.jpg cannot be previewed because it is of type image/png\n\nEdge Delivery Services enforces that images have to be published with an extension matching the type of the image. A JPEG, for example, has to be published with a .jpg or .jpeg extension, and a PNG has to be published with a .png extension. To resolve this issue, simply rename the image so that the extension matches the type of the image.\n\nI am getting an error message that an image.webp is not a supported file type\n\nWhile Edge Delivery Services serves web-optimized images as WEBP when a browser supports this image format, it does not support the upload of such images. To resolve this issue, simply reupload the image with one of the supported image file types.","lastModified":"1756820935","labs":""},{"path":"/developer/repoless-authoring","title":"Reusing code across sites with AEM authoring as your content source","image":"/developer/media_1bb754ba4f66c9259ee3ad879b788c31e48609984.png?width=1200&format=pjpg&optimize=medium","description":"By default, AEM is tightly bound to your code repository, which meets the majority of use cases.
However you may have multiple sites that differ ...","content":"style\ncontent\n\nReusing code across sites with AEM authoring as your content source\n\nBy default, AEM is tightly bound to your code repository, which meets the majority of use cases. However, you may have multiple sites that differ mostly in their content, but could leverage the same code base.\n\nRather than creating a dedicated GitHub repository for each site and keeping them all in sync, AEM supports running multiple sites from the same codebase.\n\nThis simplified setup, which eliminates the need for code replication, is also known as “repoless”, because all but your first site don’t need a GitHub repository of their own.\n\nIf your project requires the repoless flexibility of code reuse across sites, you can activate the feature.\n\nRegardless of how many sites you want to ultimately create in a repoless fashion, you must create your first site, which serves as your base site. This document explains how to create your first site for repoless use.\n\nPrerequisites\n\nTo take advantage of this feature, make sure you have done the following.\n\nYour site is already fully set up by following the document Getting Started – Universal Editor Developer Tutorial.\nYou are running AEM as a Cloud Service 2025.4 at a minimum.\nSet up the configuration service by following the document Setting up the configuration service.\nActivate repoless feature\n\nThere are several steps to activate repoless functionality for your project.
Please substitute your own site and GitHub org information appropriately.\n\nRetrieve access token\nSet up configuration service\nAdd site configuration and technical account\nUpdate AEM configuration\nAuthenticate site\n\nThis document details each of these steps.\n\nRetrieve access token\n\nYou will first need an access token to use the configuration service and configure it for the repoless use case.\n\nGo to https://admin.hlx.page/login and use the login_adobe address to log in with the Adobe identity provider.\nYou will be forwarded to https://admin.hlx.page/profile.\nBy using your browser’s developer tools, copy the value of the auth_token cookie that the Admin Service sets.\n\nOnce you have your access token, it can be passed in the header of cURL requests in the following format.\n\n-H 'x-auth-token: <your-token>'\n\nConfigure your content and code sources\n\nThe configuration service needs to know where to find your code and AEM content. Execute this cURL command to create (PUT) or update (POST) a site to use your code and AEM content.\n\ncurl -X PUT https://admin.hlx.page/config/<your-github-org>/sites/<your-aem-project>.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-token>' \\\n  --data '{\n  \"code\": {\n    \"owner\": \"<your-github-org>\",\n    \"repo\": \"<your-aem-project>\",\n    \"source\": {\n      \"type\": \"github\",\n      \"url\": \"https://github.com/<your-github-org>/<your-aem-project>\"\n    }\n  },\n  \"content\": {\n    \"source\": {\n      \"url\": \"https://author-p<your-programID>-e<your-environmentID>.adobeaemcloud.com/bin/franklin.delivery/<org>/<site>/main\",\n      \"type\": \"markup\",\n      \"suffix\": \".html\"\n    }\n  }\n}'\n\nAdd path mapping for site configuration and set technical account\n\nYou need to create a site configuration and add it to your path mapping.\n\nCreate a new page at the root of your site and choose the Configuration template.\nYou can leave the configuration empty with
only the predefined key and value columns. You only need to create it.\nCreate a mapping in the public configuration to the site configuration using a cURL command similar to the following.\ncurl --request POST \\\n  --url https://admin.hlx.page/config/<your-github-org>/sites/<your-aem-project>/public.json \\\n  --header 'x-auth-token: <your-token>' \\\n  --header 'Content-Type: application/json' \\\n  --data '{\n    \"paths\": {\n        \"mappings\": [\n            \"/content/<your-site-content>/:/\",\n            \"/content/<your-site-content>/configuration:/.helix/config.json\"\n   ],\n        \"includes\": [\n            \"/content/<your-site-content>/\"\n        ]\n    }\n}'\n\nValidate that the public configuration has been set and is available with a cURL command similar to the following.\ncurl 'https://main--<your-aem-project>--<your-github-org>.aem.live/config.json'\n\nOnce the site configuration is mapped, you can configure access control by defining your technical account so it has privileges to publish.\n\nSign into the AEM author instance and go to Tools → Cloud Services → Edge Delivery Services Configuration and select the configuration that was automatically created for your site and tap or click Properties in the tool bar.\nIn the Edge Delivery Services Configuration window, select the Authentication tab and copy the value for the technical account ID.\nIt will look similar to <tech-account-id>@techacct.adobe.com\nThe technical account is the same for all sites on a single AEM author environment.\nSet the technical account for your repoless configuration with a cURL command similar to the following, using the technical account ID that you copied.\nAdapt the admin block to define the users who should have full administrative access to the site.\nIt is an array of email addresses.\nThe wildcard * can be used.\nSee the document Configuring Authentication for Authors for more information.\ncurl --request POST \\\n  --url 
https://admin.hlx.page/config/<your-github-org>/sites/<your-aem-project>/access.json \\\n  --header 'Content-Type: application/json' \\\n  --header 'x-auth-token: <your-token>' \\\n  --data '{\n    \"admin\": {\n        \"role\": {\n            \"admin\": [\n                \"<email>@<domain>.<tld>\"\n            ],\n            \"config_admin\": [\n                \"<tech-account-id>@techacct.adobe.com\"\n            ]\n        },\n        \"requireAuth\": \"auto\"\n    }\n}'\n\n\nSince you now use the configuration service, you can remove fstab.yaml and paths.json from your Git repository.\n\nNOTE: By using the configuration service and exposing the path mapping via config.json, the paths.json file is ignored.\n\nOnce AEM is configured for repoless use, you must use the configuration service and provide a valid config.json with the paths mapping.\n\nUpdate AEM configuration\n\nNow you are ready to make the necessary changes to your Edge Delivery Services configuration in AEM.\n\nSign into the AEM author instance and go to Tools → Cloud Services → Edge Delivery Services Configuration and select the configuration that was automatically created for your site and tap or click Properties in the toolbar.\nIn the Edge Delivery Services Configuration window, change project type to aem.live with repoless config setup and tap or click Save & Close.\n\nReturn to your site using the Universal Editor and ensure that it still renders properly.\nModify some of your content and re-publish.\nVisit your published site at https://main--<your-aem-project>--<your-github-org>.aem.page/ and verify that the changes are properly reflected.\n\nYour project is now set up for repoless use.\n\nTroubleshooting\n\nThe most common issue encountered after configuring the repoless use case is that pages in the Universal Editor no longer render or you receive a white page or a generic AEM as a Cloud Service error message.
In such cases:\n\nView the source of the rendered page.\nIs there actually something rendered (correct HTML head with scripts.js, aem.js, and editor-related JSON files)?\nCheck the AEM error.log of the author instance for exceptions.\nThe most common issue is that the page component fails with 404 errors.\nconfig.json or paths.json cannot be loaded\ncomponent-definition.json etc. cannot be loaded\nRepoless use cases\n\nNow that your base site is configured for repoless usage, you can create additional sites that leverage the same code base.\n\nMulti site management with AEM authoring as your content source\n\nCreate sites for multiple languages and markets from the same source documents\n\nRepoless stage and prod environments with AEM authoring as your content source\n\nUse repoless to manage multiple environments\n\nConfiguring site authentication for AEM authoring as your content source\n\nWhen you author using AEM Sites and Universal Editor, you also must enable it in your AEM environment.\n\nUp Next\n\nPath mapping for AEM authoring as your content source","lastModified":"1769169638","labs":""},{"path":"/developer/repoless-multisite-manager","title":"Multi site management with AEM authoring as your content source","image":"/developer/media_1d1908452b63ceba075d440ca33359295be93b76b.png?width=1200&format=pjpg&optimize=medium","description":"Multi Site Manager (MSM) and its Live Copy features enable you to use the same site content in multiple locations, while allowing for variations. You ...","content":"style\ncontent\n\nMulti site management with AEM authoring as your content source\n\nMulti Site Manager (MSM) and its Live Copy features enable you to use the same site content in multiple locations, while allowing for variations. You can author content once and create Live Copies.
MSM maintains live relationships between your source content and its Live Copies so that when you change the source content, the source and Live Copies can be synchronized.\n\nYou can use MSM to create an entire content structure for your brand across locales and languages, authoring the content centrally. Your localized sites can then each be delivered by Edge Delivery Services, leveraging a central code base.\n\nIn order to leverage MSM on Edge Delivery Services with AEM authoring, you must enable the repoless feature.\n\nRequirements\n\nTo configure MSM, you must first complete the following tasks:\n\nThis document assumes that you have already created a site for your project based on the Getting Started – Universal Editor Developer Tutorial\nYou must have already enabled the repoless feature for your project.\nUse case\n\nThis document assumes that you have already created a basic localized site structure for your project. It uses the following structure for the website brand with a presence in Switzerland and Germany as an example.\n\n/content/website\n/content/website/language-masters\n/content/website/language-masters/en\n/content/website/language-masters/de\n/content/website/language-masters/fr\n/content/website/language-masters/it\n/content/website/ch\n/content/website/ch/de\n/content/website/ch/fr\n/content/website/ch/it\n/content/website/ch/en\n/content/website/de\n/content/website/de/de\n/content/website/de/en\n\n\nContent in language-masters is the source of Live Copies for the localized sites: Germany (de) and Switzerland (ch). 
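As an illustrative aid, the localized structure above can be navigated in code: given a page path, determine which localized site it belongs to and which locale it serves. The helper name and path list are assumptions for illustration, not an AEM or Edge Delivery Services API.

```javascript
// Illustrative sketch of the content structure above: resolve a page path
// under /content/website to its localized site (ch or de) and locale.
// resolveSite and localizedRoots are assumptions, not an AEM API.
const localizedRoots = ['/content/website/ch/', '/content/website/de/'];

function resolveSite(pagePath) {
  const root = localizedRoots.find((r) => pagePath.startsWith(r));
  if (!root) return null; // e.g. pages under language-masters
  const parts = root.split('/').filter(Boolean); // ['content', 'website', 'ch']
  return { site: parts[2], locale: pagePath.slice(root.length).split('/')[0] };
}

console.log(resolveSite('/content/website/ch/fr/about'));
// { site: 'ch', locale: 'fr' }
```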
The goal of this document is to create Edge Delivery Services sites that all use the same code base for each localized site.\n\nConfiguration\n\nThere are several steps to configuring the MSM repoless use case.\n\nUpdate AEM site configurations.\nCreate new Edge Delivery Services sites for your localized pages.\nUpdate cloud configuration in AEM for your localized sites.\n\nThis document details these steps.\n\nUpdate AEM site configurations\n\nConfigurations can be thought of as workspaces that can be used to gather groups of settings and their associated content for organizational purposes. When you create a site in AEM, a configuration is automatically created for it.\n\nYou generally want to share certain content between sites such as:\n\nTemplates created from content in the blueprint\nContent Fragment models, persisted queries, etc.\n\nYou can create additional configurations to facilitate such sharing. For the website use case, we would need configurations for the following paths.\n\n/content/website\n/content/website/ch\n/content/website/de\n\n\nThat is, you will have a configuration for the root of the website brand’s content (/content/website) used by the blueprints and a configuration used by each localized site (Switzerland and Germany).\n\nSign into your AEM authoring instance.\nNavigate to the Configuration Browser by going to Tools → General → Configuration Browser.\nSelect the configuration that was automatically created for your project (in this case website) and then tap or click Create in the toolbar.\nIn the Create Configuration dialog, provide a descriptive Name for your localized site (such as Switzerland) and for the Title use the title of the localized site (in this case ch).\nSelect the Cloud Configuration feature and any additional features you may need for your project such as Editable Templates.\nTap or click Create.\n\nCreate configurations for each localized site you need.
In the case of website, you would need to create a configuration for de as well alongside the ch configuration.\n\nOnce the configurations are created, you need to ensure that the localized sites use them.\n\nSign into your AEM authoring instance.\nNavigate to the Sites console by going to Navigation → Sites.\nSelect the localized site such as Switzerland.\nTap or click Properties in the toolbar.\nIn the page properties window, select the Advanced tab and under the Configuration heading, unselect the option Inherited from /content/website, where website is the site root.\nIn the Cloud Configuration field, use the path browser to select the configuration you created for your localized site such as Switzerland under /conf/website/ch.\nTap or click Save & Close.\n\nAssign the respective configurations to the additional localized sites. In the case of website, you would need to assign the /conf/website/de configuration to the Germany site as well.\n\nCreate new Edge Delivery Services sites for your localized pages\n\nTo connect more sites to Edge Delivery Services for a multi-region, multi-language site setup, you must set up a new aem.live site for each of your AEM MSM sites. There is a 1:1 relationship between AEM MSM sites and aem.live sites with a shared Git repository and code base.\n\nFor this example, we will create the site website-ch for the Swiss presence of website, whose localized content is under the AEM path /content/website/ch.\n\nRetrieve your auth token and the technical account for your program.\nPlease see the document Reusing Code Across Sites for details on how to obtain your access token and the technical account for your program.\nCreate a new site by making the following call to the configuration service. Please consider:\nThe project name in the POST URL must be the new site name you are creating.
In this example, it is website-ch.\nThe code configuration should be the same as you used for the initial project creation.\nThe content → source → url must be adapted to the name of the new site you are creating. In this example, it is website-ch.\nThat is, the site name in the POST URL and the content → source → url must be the same.\nAdapt the admin block to define the users who should have full administrative access to the site.\nIt is an array of email addresses.\nThe wildcard * can be used.\nSee the document Configuring Authentication for Authors for more information.\ncurl --request POST \\\n  --url https://admin.hlx.page/config/<your-github-org>/sites/website-ch.json \\\n  --header 'Content-Type: application/json' \\\n  --header 'x-auth-token: <your-token>' \\\n  --data '{\n    \"code\": {\n        \"owner\": \"<your-github-org>\",\n        \"repo\": \"website\",\n        \"source\": {\n            \"type\": \"github\",\n            \"url\": \"https://github.com/<your-github-org>/website\"\n        }\n    },\n    \"content\": {\n        \"source\": {\n            \"url\": \"https://author-p<programID>-e<environmentID>.adobeaemcloud.com/bin/franklin.delivery/<your-github-org>/website-ch/main\",\n            \"type\": \"markup\",\n            \"suffix\": \".html\"\n        }\n    },\n    \"access\": {\n        \"admin\": {\n            \"role\": {\n                \"admin\": [\n                    \"<email>@<domain>.<tld>\"\n                ],\n                \"config_admin\": [\n                    \"<tech-account-id>@techacct.adobe.com\"\n                ]\n            },\n            \"requireAuth\": \"auto\"\n        }\n    }\n}'\n\nAdd the path mapping for your new site by making the following call to the configuration service.\ncurl --request POST \\\n  --url https://admin.hlx.page/config/<your-github-org>/sites/website-ch/public.json \\\n  --header 'Content-Type: application/json' \\\n  --header 'x-auth-token: <your-token>' \\\n  --data '{\n    \"paths\": 
{\n        \"mappings\": [\n            \"/content/website/ch/:/\"\n        ],\n        \"includes\": [\n            \"/content/website/ch/\"\n        ]\n    }\n}'\n\nVerify that the public configuration of your new site is working by calling https://main--website-ch--<your-github-org>.aem.page/config.json and verifying the content of the returned JSON.\n\nRepeat the steps to create additional localized sites. In the case of website, you would need to create a website-de site for the German presence as well.\n\nUpdate Cloud Configurations in AEM for Your Localized Pages\n\nYour pages in AEM must be configured to use the new Edge Delivery Services sites you created in the previous section for your localized presence. In this example, content under /content/website/ch needs to use the website-ch site you created. Similarly, content under /content/website/de needs to use the website-de site.\n\nSign into the AEM author instance and go to Tools → Cloud Services → Edge Delivery Services Configuration.\nSelect the configuration that was automatically created for your project and then the folder that was created for the localized page. In this case, that would be Switzerland (ch).\nTap or click Create > Configuration in the toolbar.\nIn the Edge Delivery Services Configuration window:\nProvide your GitHub organization in the Organization field.\nChange the site name to the name of the site you created in the previous section.
In this case, that would be website-ch.\nChange project type to aem.live with repoless config setup.\nTap or click Save & Close.\nVerify your setup\n\nNow that you have made all of the necessary configuration changes, verify that everything is working as expected.\n\nSign into your AEM authoring instance.\nNavigate to the Sites console by going to Navigation → Sites.\nSelect the localized site such as Switzerland.\nTap or click Edit in the toolbar.\nEnsure that the page properly renders in the Universal Editor and uses the same code as your site root.\nMake a change to the page and re-publish.\nVisit your new Edge Delivery Services site for that localized page at https://main--website-ch--<your-github-org>.aem.page.\n\nIf you see the changes that you made, your MSM setup is working properly.\n\nPrevious\n\nReusing code across sites with AEM authoring as your content source","lastModified":"1746000939","labs":""},{"path":"/developer/repoless-environments","title":"Repoless stage and prod environments with AEM authoring as your content source","image":"/developer/media_1c1b568b1ec7baf46317e69d8861339fb4f8c3879.png?width=1200&format=pjpg&optimize=medium","description":"You may wish to set up a site for your production environment separate from your staging environment. Setting up a second site for a separate ...","content":"style\ncontent\n\nRepoless stage and prod environments with AEM authoring as your content source\n\nYou may wish to set up a site for your production environment separate from your staging environment. Setting up a second site for a separate staging and production setup is similar to the setup required for multi site management. In fact, it can be combined with MSM site structures if required.\n\nThis document uses the typical example of separate staging and production environments. 
You can create separate sites for any environments you wish.\n\nRequirements\n\nTo configure repoless stage and production environments, you must first complete the following tasks:\n\nThis document assumes that you have already created a site for your project based on the Getting Started – Universal Editor Developer Tutorial.\nYou must have already enabled the repoless feature for your project.\nConfiguration\n\nThis document describes how to set up a separate production site for your project using the same code base. The following assumptions are made.\n\nThe staging site is already set up and you now want to create a configuration for the production site.\nThe content structure in AEM authoring is similar.\nThe same path mappings will be used for staging and production.\n\nIn this example, we are assuming that a site has already been created for the project called website, whose GitHub repo is also called website.\n\nThere are two steps to configuring a separate production site.\n\nCreate new Edge Delivery Services sites for your production environment.\nUpdate cloud configuration in AEM for your production site.\n\nThis document details these configuration steps.\n\nCreate New Edge Delivery Services Sites for Your Production Environment\nRetrieve your auth token and the technical account for your program.\nPlease see the document Reusing Code Across Sites for details on how to obtain your access token and the technical account for your program.\nCreate a new site by making the following call to the configuration service. Please consider:\nThe project name in the POST URL must be the new site name you are creating. In this example, it is website-prod.\nThe code configuration should be the same as you used for the initial project creation.\nThe content → source → url must be adapted to the name of the new site you are creating.
In this example, it is website-prod.\nI.e., the site name in POST URL and the content → source → url must be the same.\nAdapt the admin block to define the users who should have full administrative access to the site.\nIt is an array of email addresses.\nThe wildcard * can be used.\nSee the document Configuring Authentication for Authors for more information.\ncurl --request POST \\\n  --url https://admin.hlx.page/config/<your-github-org>/sites/website-prod.json \\\n  --header 'x-auth-token: <your-token>' \\\n  --header 'Content-Type: application/json' \\\n  --data '{\n    \"code\": {\n        \"owner\": \"<your-github-org>\",\n        \"repo\": \"website\",\n        \"source\": {\n            \"type\": \"github\",\n            \"url\": \"https://github.com/<your-github-org>/website\"\n        }\n    },\n    \"content\": {\n        \"source\": {\n            \"url\": \"https://author-p<programID>-e<environmentID>.adobeaemcloud.com/bin/franklin.delivery/<your-github-org>/website-prod/main\",\n            \"type\": \"markup\",\n            \"suffix\": \".html\"\n        }\n    },\n    \"access\": {\n        \"admin\": {\n            \"role\": {\n                \"admin\": [\n                    \"<email>@<domain>.<tld>\"\n                ],\n                \"config_admin\": [\n                    \"<tech-account-id>@techacct.adobe.com\"\n                ]\n            },\n            \"requireAuth\": \"auto\"\n        }\n    }\n}'\n\nAdd the path mapping for your new site by making the following call to the configuration service.\ncurl --request POST \\\n  --url https://admin.hlx.page/config/<your-github-org>/sites/website-prod/public.json \\\n  --header 'x-auth-token: <your-token>' \\\n  --header 'Content-Type: application/json' \\\n  --data '{\n    \"paths\": {\n        \"mappings\": [\n            \"/content/website/:/\"\n        ],\n        \"includes\": [\n            \"/content/website/\"\n        ]\n    }\n}'\n\n\nVerify that the public configuration of your 
new site is working by calling https://main--website-prod--<your-github-org>.aem.page/config.json and verifying the content of the returned JSON.\n\nUpdate Cloud Configurations in AEM for Your Production Site\n\nYour production AEM must be configured to use the new Edge Delivery Services site you created in the previous section for a dedicated production site. In this example, content under /content/website on your production environment needs to use the website-prod site you created.\n\nSign into the AEM production instance and go to Tools → Cloud Services → Edge Delivery Services Configuration.\nSelect the configuration that was automatically created for your project.\nTap or click Properties in the toolbar.\nIn the Edge Delivery Services Configuration window:\nProvide your GitHub organization in the Organization field.\nChange the site name to the name of the site you created in the previous section. In this case, that would be website-prod.\nChange project type to aem.live with repoless config setup.\nTap or click Save & Close.\nVerify Your Setup\n\nNow that you have made all of the necessary configuration changes, verify that everything is working as expected.\n\nSign into your AEM production authoring instance.\nNavigate to the Sites console by going to Navigation → Sites.\nSelect a page in your site.\nTap or click Edit in the toolbar.\nEnsure that the page properly renders in the Universal Editor and uses the same code as your site root.\nMake a change to the page and re-publish.\nVisit your new Edge Delivery Services site for that page at https://main--website-prod--<your-github-org>.aem.page.\n\nIf you see the changes that you made, your separate production site setup is working properly.\n\nUsage\n\nOnce you have configured your project with repoless staging and production environments, you can manage code for them independently.
The following diagram illustrates the relationship between the content in your various AEM environments, your Edge Delivery Services sites, and your GitHub repositories.\n\nhttps://www.aem.live/developer/repoless-environments.svg\n\nNote: While AEM Cloud Services provides all customers with Production, Stage and Development environments, Edge Delivery Services typically does not need all of these. Although setting these up using the repoless approach is easy, it incurs extra tasks such as content sync and managing separate user access.\n\nConsider using Staging and Development environments if you have use cases for them, such as having a dedicated environment for user acceptance tests.\n\nPrevious\n\nReusing content across sites with AEM authoring as your content source","lastModified":"1755509641","labs":""},{"path":"/developer/authentication-setup-site-for-aem-authoring","title":"Site authentication for your visitors when using AEM Authoring","image":"/developer/media_1fcb22c690d4561fee2d7c7cde085c94216fd4095.png?width=1200&format=pjpg&optimize=medium","description":"aem.live supports token-based authentication. When using AEM authoring as your content source, site authentication is usually applied to both the preview and publish sites, but ...","content":"style\ncontent\n\nSite authentication for your visitors when using AEM Authoring\n\naem.live supports token-based authentication. 
When using AEM authoring as your content source, site authentication is usually applied to both the preview and publish sites, but it can also be configured to protect only the preview or only the publish site.\n\nNOTE: If you choose to activate site authentication, you must configure it in your AEM authoring environments as well.\n\nRequirements\n\nTo configure site authentication for use with content authoring, you must first complete the following tasks:\n\nThis document assumes that you have already created a site for your project based on the Getting Started – Universal Editor Developer Tutorial.\nYou must have already enabled the repoless feature for your project.\nConfigure Site Authentication\n\nYou must first configure site authentication for your site. While doing this, take note of the following information:\n\nThe ID of the technical account\nYour site authentication token\n\nThese items are required to complete the configuration of site authentication for your AEM authoring environment.\n\nConfigure Authoring Environment\n\nOnce site authentication is configured, you can enable it in your AEM authoring environment.\n\nSign into the AEM author instance and go to Tools → Cloud Services → Edge Delivery Services Configuration. Select the configuration that was automatically created for your site and tap or click Properties in the toolbar.\nIn the Edge Delivery Services Configuration window, select the Authentication tab and provide the Site Authentication Token that you copied previously.\n\nVerify that the Technical account ID field matches the one you copied previously.\nThis field is read-only and predefined.\nThe technical account is the same for all sites on a single AEM author environment.\nTap or click Save & Close.\n\nPrevious\n\nConfiguring Site Authentication","lastModified":"1763139881","labs":""},{"path":"/docs/aem-embed","title":"Embedding Content in non-AEM 
experiences","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"In projects we often see the need to embed content in experiences that are not controlled by AEM. The need for that can arise from ...","content":"style\ncontent\n\nEmbedding Content in non-AEM experiences\n\nIn projects we often see the need to embed content in experiences that are not controlled by AEM. The need for that can arise from having an existing app or website that has been built using a web framework, an existing native app, or simply being part of a migration where certain elements of a page (often the header with the navigation and the footer) need to be shared into legacy environments.\n\nThere are a number of different approaches to achieve this, and we are trying to make it as easy and simple as possible for an existing application or site to consume content managed by AEM. One of the easiest ways to embed content for web and hybrid applications is to use Web Components, which use a shadow DOM to isolate the CSS context completely from the containing environment.\n\nThis approach also makes it possible to separate the development of design and functionality on the content side, and it does not require updates or releases to the containing application when the design, content, or functionality of the embedded content changes. It therefore has material advantages over a traditional headless development approach.\n\nUsing AEM Embed Web Component\n\nFor an application or website that needs to consume content from AEM, it is as simple as adding the aem-embed web component to the project. 
Simply add the JavaScript file to your project.\n\nhttps://github.com/adobe/aem-embed/blob/main/scripts/aem-embed.js\n\n…and include it with a <script> tag or load it as a module, depending on what's easiest in your framework.\n\nIn a plain vanilla web project, it is as simple as adding one line to your <head> or anywhere else before the <aem-embed> tag is used.\n\n<script src=\"/scripts/aem-embed.js\" type=\"module\"></script>\n\n\nOnce aem-embed.js is added to your document, you can just use the <aem-embed> tag to reference content from an AEM project via a url= attribute. This includes the referenced content, similar to a fragment.\n\nSee usage examples for a banner use case here:\nhttps://main--aem-embed--adobe.aem.page/examples/banners.html\n\nSpecial cases for header (navigation) and footer\n\nSince it is common to share a navigation managed by AEM in other experiences on a website, there is special support for headers and footers via the type= attribute.\n\nSee usage examples for header and footer here:\nhttps://main--aem-embed--adobe.aem.page/examples/header.html\n\nMaking your project embeddable\nCORS\n\nDepending on the setup, it might be necessary to change the CORS headers to allow access from all the consuming origins. An easy way to achieve that is to add an access-control-allow-origin header with the value of * via the headers in the config service.\n\nCSS & JavaScript\n\nAs web components change the DOM structure of a page, CSS and JavaScript behavior often needs to be adjusted a little. Below is a list of common adjustments that need to be made for an existing project to be consumable.\n\nIf you are starting with a new project, it might be easiest to just fork https://github.com/aemsites/embed-example\n\nCSS :root\n\nA lot of CSS uses :root, especially to define CSS variables. 
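For illustration (the variable names here are hypothetical, not taken from any specific project), a typical rule that defines variables on :root alone will not match inside the web component's shadow DOM:

```css
/* Defined only on :root, these variables are not visible
   inside the shadow DOM created by the aem-embed component. */
:root {
  --background-color: #fff;
  --text-color: #131313;
}
```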
For variables to work as expected, adding :host to the CSS selector is often the only change needed.\n\nSee commit for details\n\nJavaScript loadPage() suppression\n\nTo make sure that the full page is not loaded when an import happens via an aem-embed, full page decoration needs to be suppressed in that case. This is usually done by replacing the loadPage() line in scripts.js with the following.\n\nif (!window.hlx.suppressLoadPage) loadPage(document);\n\n\nSee commit for details\n\nJavaScript header.js and footer.js changes\n\nIf you are planning to embed a header or footer, some changes are potentially needed to make sure that the header and footer can operate on content that is already in the DOM when they are called. Most header and footer implementations are responsible for loading their own content, and are not aware of content already being available in the DOM when the decoration step is started.\n\nSee examples for header and footer\n\nJavaScript setting window.hlx.codeBasePath\n\nIf your site uses a site root path, you will likely need to adjust how this is set in aem.js.\n\nSee commit for details\n\nUsing the Embed Simulator chrome extension\n\nTo simulate what the end result of an embed looks like in an existing application or website, without having to make any changes to that application or site, there is a Chrome extension available for use here.","lastModified":"1745618955","labs":"AEM Sites"},{"path":"/developer/cloudflare-zero-trust","title":"Cloudflare Zero Trust Site Protection","image":"/developer/media_1b40ff4d5ece5386ad83cb1998da2bc6b2715e3e1.png?width=1200&format=pjpg&optimize=medium","description":"Integrating Cloudflare Zero Trust provides granular control over who can access your website or applications. 
Through its authentication and authorization features, you can ensure only ...","content":"style\ncontent\n\nCloudflare Zero Trust Site Protection\n\nIntegrating Cloudflare Zero Trust provides granular control over who can access your website or applications. Through its authentication and authorization features, you can ensure only users that meet your defined security criteria are granted access, which reduces the risk of unauthorized access and potential threats.\n\nPrerequisites\n\nTo follow this guide, you will need the following:\n\nA previously set up Edge Delivery Services site. For this demo, we’ll use a site called zero-trust-site in the aemsites GitHub org.\nYou will need to be a configuration administrator of the org or site and have an authorization token to make requests to the configuration service.\n\nIf you don’t already have a site to use, create an Edge Delivery Services site by following our developer tutorial.\n\nCreate a Cloudflare Site\n\nFollow the steps to set up a Cloudflare site and worker using the wrangler CLI. If you’re hosting the application on a subdomain, ensure your CNAME record is updated accordingly. In this guide, we configured a CNAME record for our example application at zero-trust.example.com.\n\nCreate a site secret\n\nCreate a site access token. This token can be used to restrict access to your Edge Delivery Services site.\n\ncurl -X POST https://admin.hlx.page/config/aemsites/sites/zero-trust-site/secrets.json \\\n  -H 'x-auth-token: <your-auth-token>'\n\n{\n  \"id\": \"MEXwhn7J7m1c29ngZqriA5N9DVIb67R_9394vsJ\",\n  \"Type\": \"hashed\",\n  \"value\": \"hlx_sQP7218fBcODiUi7NLVUH6VVT\",\n  \"created\": \"2024-08-21T18:28:54.075Z\",\n  \"lastModified\": \"2025-03-25T12:44:13.235Z\"\n}\n\n\nIn the response you will get back an id and a value. 
Keep track of these as you will need them in the next steps.\n\nEnable site authentication using the site token\n\nUse the token id from the response above in place of the TOKEN_ID in the body.\n\ncurl --request POST \\\n  --url https://admin.hlx.page/config/aemsites/sites/zero-trust-site/access/site.json \\\n  --header 'Content-Type: application/json' \\\n  --header 'x-auth-token: <your-auth-token>' \\\n  --data '{\n    \"allow\": [\"*@acme.com\"],\n    \"secretId\": [\"MEXwhn7J7m1c29ngZqriA5N9DVIb67R_9394vsJ\"]\n}'\n\n\nThe .page and .live origins will now be protected. Users wanting to access the site directly via these origins will now need to sign into the sidekick.\n\nSet Zero Trust Authentication methods\n\nNavigate to the Zero Trust home from the left navigation bar in Cloudflare\n\nand select Settings, then pick Authentication\n\nFrom Login methods select the Add new button\n\nThis is your opportunity to configure the identity provider you want to use for your site. For this demo we will use One-time PIN.\n\nSet up the Zero Trust Policies\n\nSelect Access → Policies\n\nSelect the Add a policy button\n\nAdd a policy with the name TestApp_EmailAccessPolicy and duration of 24 hours. Change either of these values as you see fit.\n\nUnder rules, you can pick whether an explicit list of emails should have access to the application or whether an entire domain should be allowed access. For this example we will allow anyone with an email address ending in adobe.com to access the site.\n\nSelect Save\n\nCreate the Zero Trust Application\n\nSelect Access → Applications and click the Add an application button\n\nSelect Self-hosted\n\nEnter TestApp or any name of your choice for the application name. You can keep the session duration set to 24 hours.\n\nSelect Add public hostname and enter the domain (and optional subdomain) you set up in the first step of this guide. For the path, enter *. 
For our demo, we are setting our public hostname to zero-trust.example.com\n\nUnder Access policies click Select existing policies\n\nSelect the TestApp_EmailAccessPolicy we created previously and click Confirm.\n\nSelect Next at the bottom to get to the Login methods page\n\nBelow, you’ll find a list of all the login methods permitted for our application. By default, Accept all available identity providers is selected. However, if you deselect this option, you can choose a specific login method from the list of configured options. Currently, only One-time PIN has been set up, so it is the only available choice.\n\nSelect Next again to get to the advanced settings page.\n\nOpen the Cross-Origin Resource Sharing (CORS) settings and enable Bypass options requests to origin to let Edge Delivery handle CORS.\n\n\n\nSelect Save\n\nNow select the 3 dots on the right of the new application and click Edit.\n\nTake note of the Application Audience Tag; we will need this in a future step.\n\nUpdate the worker\n\nNext, we’ll update our worker to validate incoming requests, ensuring only approved traffic can access your site.\n\nInstall the jose package\n\nIn the worker code we created at the start of the guide, install the jose package. This library is designed to simplify working with JWT tokens.\n\nnpm install jose\n\nEdit worker code\n\nCopy the content of this file and paste it into src/index.js\n\nImport functions from the jose package\n\nAt the top of the index file, add the following to import the required functions from the jose package.\n\nimport { jwtVerify, createRemoteJWKSet } from \"jose\";\n\nAdd token validation logic\n\nAround line 26 at the top of your handleRequest method, insert the following logic to validate the JWT token provided in the cf-access-jwt-assertion header. 
This snippet retrieves the token, sets up the verification context by referencing your Cloudflare Access domain and audience, and uses a remote JSON Web Key Set (JWKS) to verify the token’s integrity. If the token is missing or fails verification, the worker immediately returns an error response, ensuring that only authenticated and authorized requests proceed.\n\ntry {\n  const TEAM_DOMAIN = `https://${env.TEAM_DOMAIN}`;\n  const AUD = env.POLICY_AUD;\n  const CERTS_URL = `${TEAM_DOMAIN}/cdn-cgi/access/certs`;\n  const JWKS = createRemoteJWKSet(new URL(CERTS_URL));\n\n  const token = request.headers.get(\"cf-access-jwt-assertion\");\n  if (!token) {\n    return new Response('missing required cf authorization token', { status: 403 });\n  }\n\n  await jwtVerify(token, JWKS, {\n    issuer: TEAM_DOMAIN,\n    audience: AUD,\n  });\n} catch (error) {\n  return new Response(`Token verification failed: ${error.message}`, { status: 401 });\n}\n\nUpdate wrangler.toml\n\nNote: Some of the values below are considered sensitive and must be kept confidential; ensure they are securely stored and never committed to a public repository in GitHub.\n\nUpdate route to match your DNS setup (e.g., zero-trust.example.com/*)\n\nEnsure you have the correct account_id set. To find your account_id, visit the Websites Dashboard in Cloudflare and select your site; it will be listed on the right-hand side of the dashboard under API.\n\nEnsure the compatibility_date is set to at least 2025-03-17\n\nUpdate the ORIGIN_HOSTNAME to the edge delivery origin host name (for instance main--zero-trust-site--aemsites.aem.live)\n\nIf commented out, remove the # in front of PUSH_INVALIDATION\n\nUpdate the ORIGIN_AUTHENTICATION to the value from the site token created in a previous step (e.g., hlx_sQP7218fBcODiUi7NLVUH6VVTpucnHIA51yEuDFS0GE0)\n\nGet your team domain by going to the Zero Trust dashboard and opening Settings → Custom Pages. 
Create a new variable with the name TEAM_DOMAIN and set it to your team domain (for instance dylandepass.cloudflareaccess.com)\n\nCreate a new variable called POLICY_AUD and set it to the AUD value from the Zero Trust application you created above.\n\nYour wrangler.toml should look something like this.\n\nname = \"zero-trust-worker\"\n\nmain = \"src/index.mjs\"\nroute = \"zero-trust.example.com/*\"\naccount_id = \"abb72b21b09d6ac32460ac4654da1248\"\n\ncompatibility_date = \"2025-03-17\"\n\n[build]\ncommand = \"npm install\"\n\n[vars]\n# TODO: set origin host name\nORIGIN_HOSTNAME = \"main--zero-trust-site--aemsites.aem.live\"\n\n# Optional, but recommended: enable push invalidation\n# see https://www.aem.live/docs/setup-byo-cdn-push-invalidation#cloudflare\nPUSH_INVALIDATION = \"enabled\"\n\n# Optional: enable origin authentication\n# see https://www.aem.live/docs/authentication-setup-site\nORIGIN_AUTHENTICATION = \"hlx_sQP7218fBcODiUi7NLVUH6VVTpucnHIA51yEuDFS0GE0\"\n\nTEAM_DOMAIN = \"example-domain.cloudflareaccess.com\"\n\nPOLICY_AUD = \"94k128458a52641b9a5294d2cebc5kjk124021964fa01e4aal8b5c11d34948e8s0\"\n\n\nCongratulations! Your site should now be protected by Cloudflare Zero Trust. Try navigating to your site and authenticating using a PIN. 
Upon successful authentication you should see your site.","lastModified":"1744018092","labs":""},{"path":"/docs/csp-strict-dynamic-cached-nonce","title":"Content Security Policy: strict-dynamic + (cached) nonce","image":"/docs/media_111d61890a53373bbee6ba7d73233550524a17e77.png?width=1200&format=pjpg&optimize=medium","description":"Content Security Policy is a browser feature that helps prevent and mitigate certain types of threats and attacks.","content":"style\ncontent\nContent Security Policy: strict-dynamic + (cached) nonce\n\nIntroduction\n\nContent Security Policy is a browser feature that helps prevent and mitigate certain types of threats and attacks.\n\nThere are multiple directives to restrict and control which resources can be loaded on your webpage by the browser, which other domains are allowed to iframe your content, etc.\n\nThis guide is specific to mitigating Cross Site Scripting (XSS) attacks with a specific Content Security Policy configuration in AEM Edge Delivery Services and is not intended to be a general, exhaustive guide of all the Content Security Policy options and possibilities supported by the browser.\n\nImportant Note: Content Security Policies are meant as a last line of defense when other measures fail. They are not intended to replace the use of safe DOM APIs and proper sanitisation of user input by developers in the code. They will not cover every single attack path, nor are they meant to. 
It is always recommended that developers code in a safe manner.\n\nThe specific content security policy configuration recommended by this guide is:\n\nscript-src 'nonce-aem' 'strict-dynamic'; base-uri 'self'; object-src 'none';\n\nAs a short summary: This content security policy will allow the execution of only those scripts which have the correct nonce value at top level (nonce component), plus any script loaded client-side by these and their descendants, creating a trust chain, as long as the additional scripts are loaded via non-“parser-inserted” elements (strict-dynamic component). More information\n\nWe have chosen this specific content security policy after reviewing the following:\n\nThe findings of the following Google Research paper with regard to host-based CSPs\nEase of deployment and maintenance on customer sites (through the use of strict-dynamic)\nThe number of threats it mitigates as opposed to other alternatives\nHow well it fits in the AEM Edge Delivery Services architecture\nThe recommendation coming from the Google Lighthouse Report\n\nCustomers can choose to add other directives to complement this one to meet their security needs.\n\nConfiguration\n\nStarting with version 5 of the AEM Edge Delivery Services architecture, this feature is part of the boilerplate and is included by default in all new sites built from it.\n\nThere are two components to the configuration:\n\nTrusted Scripts\nPolicy Enforcement\n\nThe instructions below show how to mark trusted scripts in your repository and then how to configure the policy to be enforced.\n\nAfter the enforcement is put in place, AEM will replace the aem part of the nonce in both the policy and the script attributes with a cryptographically random string, for every request that hits the rendering engine, generating new HTML markup from content. 
Once the HTML markup and headers are generated, they are cached on the CDN and are considered immutable.\n\nRequests that hit the CDN cache will see the same nonce value until cache expiration.\n\nThe result should look as follows. If you don’t see the nonce generating a random value, it means the configuration was not done correctly.\n\n1. Trusted Scripts\n\nFirst, no matter how you choose to enforce it, you must add the nonce attribute marker (nonce=\"aem\") to the scripts you trust in HTML. This can be in:\n\n1.1. head.html\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"/>\n<script nonce=\"aem\" src=\"/scripts/aem.js\" type=\"module\"></script>\n<script nonce=\"aem\" src=\"/scripts/scripts.js\" type=\"module\"></script>\n<link rel=\"stylesheet\" href=\"/styles/styles.css\"/>\n\n1.2. Other static HTML files from your repository (for example 404.html)\n2. Policy Enforcement\n\nYou can enforce the policy via either\n\nHeader (recommended)\n<meta> tag\n\nIt is recommended that the Content Security Policy is delivered to the browser using a header. That way, the browser will apply it before scripts get a chance to execute. 
When using the meta tag, the browser will only apply the policy after it is encountered during parsing, providing no protection for anything that is above it in the HTML.\n\n2.1 Header Configuration\n\nYou can choose to configure either the content-security-policy or the content-security-policy-report-only headers with the following value:\n\nscript-src 'nonce-aem' 'strict-dynamic'; base-uri 'self'; object-src 'none';\n\nNote: The content-security-policy-report-only header doesn’t offer any protection; it only allows you to test, through error messages, whether any scripts would be blocked once the policy is fully enforced.\n\nThis can be done using one of the following methods:\n\n2.1.1 The configuration service\n\ncurl -X POST https://admin.hlx.page/config/acme/sites/website/headers.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-auth-token>' \\\n  --data '{\n\t\"/**\": [\n      {\n        \"key\": \"content-security-policy\",\n        \"value\": \"script-src 'nonce-aem' 'strict-dynamic'; base-uri 'self'; object-src 'none';\"\n      }\n    ]\n}'\n\n\n2.1.2 Headers sheet\n\n2.1.3 A specific attribute for the meta tag move-to-http-header=\"true\" (only for content-security-policy and only when using a nonce based CSP).\n\nThis method can be useful for trying out the configuration in a branch, but has the disadvantage that it needs to be added to head.html and every other static HTML file from your repository.\n\n<meta\n  http-equiv=\"content-security-policy\"\n  content=\"script-src 'nonce-aem' 'strict-dynamic'; base-uri 'self'; object-src 'none';\"\n  move-to-http-header=\"true\"\n>\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"/>\n<script nonce=\"aem\" src=\"/scripts/aem.js\" type=\"module\"></script>\n<script nonce=\"aem\" src=\"/scripts/scripts.js\" type=\"module\"></script>\n<link rel=\"stylesheet\" href=\"/styles/styles.css\"/>\n\n2.2 Meta Tag Configuration\n\nYou can configure the content 
security policy as a meta tag as well. This is an out-of-the-box browser feature, but it is considered less secure than using a header.\n\n<meta\n  http-equiv=\"content-security-policy\"\n  content=\"script-src 'nonce-aem' 'strict-dynamic'; base-uri 'self'; object-src 'none';\"\n>\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"/>\n<script nonce=\"aem\" src=\"/scripts/aem.js\" type=\"module\"></script>\n<script nonce=\"aem\" src=\"/scripts/scripts.js\" type=\"module\"></script>\n<link rel=\"stylesheet\" href=\"/styles/styles.css\"/>\n\n3. A note on BYO CDN - Cloudflare\n\nIf you are using Cloudflare as BYO CDN in front of AEM, please ensure that you are using the latest version of the Cloudflare Worker code, which removes the content-security-policy header for 304 response codes.\n\nThis ensures that the CDN does not return a content-security-policy header in a 304 response with a nonce value that conflicts with what your users already have in their browser cache.\n\nTechnical Deep Dive - FAQ\n1. Does caching the nonce render the content security policy ineffective?\n\nNo, given a set of assumptions, which hold true for typical sites implemented with AEM Edge Delivery Services.\n\nAssumptions:\n\nWebsites do not make modifications server-side to the HTML produced by AEM Edge Delivery Services (for example in the customer’s CDN).\nunsafe-eval, unsafe-hashes are not added to the content security policy.\n\nCustomers with implementations which break these assumptions should evaluate the effectiveness of the content security policy in the context of the new set of conditions specific to their website.\n\nBreaking this down into classes of Cross Site Scripting Attacks:\n\n1. 
Stored XSS + Reflected XSS\n\nThese types of Cross Site Scripting attacks occur through injection server side.\n\nEvery new request that hits the AEM Edge Delivery Services’ HTML rendering pipeline will always receive a new nonce value.\nThis effectively mitigates these types of attacks, because attackers trying to inject a <script> tag server side into the HTML cannot guess the new value of the nonce.\n\n2. DOM Based XSS\n\nThis type of Cross Site Scripting attack occurs client side, when the client side JavaScript injects user input (from query parameters, url fragments, author supplied information in HTML) without proper context specific encoding/sanitization into unsafe DOM APIs which interpret the input as HTML or JavaScript, instead of text.\n\nThe list below shows which uses of DOM APIs* are considered mitigated and which aren't.\n\n*Tested with Chrome, Mozilla, Safari\n\n2.1 Mitigated even if the nonce is cached\n\n✅ HTML inlined attribute event handlers (onclick, onblur, onload, onerror etc.). e.g: <img src='x' onerror='alert(\"xss\")'>\n\nWhen strict-dynamic is present, browsers will ignore the unsafe-inline directive and will not execute inlined event handlers.\n\n✅ javascript: navigations\n\njavascript: protocol in the href/src attributes (e.g. <a href=\"javascript:alert(1)\" >click me!</a>)\nlocation.href / location.assign() / location.replace (e.g. 
location.href = 'javascript:alert(\"xss\")' )\n\nWhen strict-dynamic is present, browsers will ignore the unsafe-inline directive and will not execute this type of JavaScript.\n\n✅ Eval based XSS - (obviously, as long as unsafe-eval is not present in the policy)\n\neval / setTimeout / setInterval / Function\n\n✅ .innerHTML, .outerHTML, .insertAdjacentHTML, .setHTMLUnsafe\n\nModern browsers should not execute <script> tags injected through these DOM APIs, according to the MDN documentation.\n\nThis leaves just:\n\nHTML inlined event handlers: which as specified above are mitigated\njavascript: navigations: which as specified above are mitigated\n\n2.2 not protected, because the nonce is cached\n\nThe following DOM APIs remain vulnerable, because looking up the value of the cached nonce can lead to a successful XSS.\n\n❌ document.write / document.writeln\n\nThe MDN documentation strongly discourages the use of these APIs.\n\n2.3 not protected, regardless of whether the nonce is cached\n\n❌ document.createRange().createContextualFragment\n\nWhether the nonce is cached/known is irrelevant; scripts injected using this API are always executed when strict-dynamic is present.\n\nCurrently reported to:\n\nhttps://github.com/w3c/webappsec-csp/issues/708\nhttps://issues.chromium.org/issues/376479689\n\n❌ Unsanitized user input in the following scenarios is not mitigated, as the use is permitted by strict-dynamic\n\nsrc attribute of a <script> tag created by document.createElement('script')\nbody of a <script> tag created by document.createElement('script')\nusage of the import API\n\nThe general recommendation in these cases is simply that no user input should reach these places (either url path, query parameters, fragment or even from the HTML), as they are extremely difficult to mitigate even with the strictest CSPs.\n\n2.4. 
not protected by CSPs in general\n\n❌ Script-less attacks: HTML injection, DOM clobbering, etc.\n\nIt is important to note that CSPs don’t block every type of web attack, and that’s why they are meant as an additional defense, rather than your main defense.\n\n2. Why isn’t the cache simply disabled when the nonce is present?\n\nAs with any engineering solution, the content security policy and the protection it provides need to be put in the context of where they are used and what the trade-offs are.\n\nIn AEM Edge Delivery Services, the highly efficient use of the CDN cache is a key component of how Adobe delivers high performance, reliability and scalability.\n\nWith the current data outlined above, our understanding is that the theoretical disabling of the cache would benefit only a highly discouraged edge case (the usage of document.write / document.writeln), as the other cases don’t seem to depend on whether the nonce is cached or not.\n\n3. Why not use hash based content security policies, which work with caching?\n\nThere are a couple of reasons that make the use of hashes less efficient for this architecture:\n\n3.1. We observed that when hashes are used, the import browser API doesn’t work. This is a key component in the AEM Boilerplate and how the client side rendering is structured.\n\n3.2. The second problem is that the policy (which sits in either the HTML header or body) would have a separate caching lifecycle from the scripts referenced in the HTML. This can cause the rendering of the website to break if, for any reason, the hash in the policy is outdated compared to the script delivered. This felt like too high a risk for the uptime of the site, especially when the nonce alternative does not suffer from this problem. Since the nonce header and HTML body are always stored together in the cache, this problem should not appear.\n\n4. 
Why use this content security policy instead of a host based one?\n\nThe following research paper from Google found these problems, which we could confirm in practice for AEM Edge Delivery Services customers that had host-based content security policies:\n\n4.1. Presence of unsafe-inline for inline scripts. This makes trivial DOM based XSS possible, since any unsanitised user input that ends up in a sink like innerHTML can be exploited.\n\n4.2. Presence of domains where anybody can place their own scripts without restrictions. This presents a very low barrier for an attacker: instead of serving a specific script from a denylisted domain, they can just upload it to one that's allowlisted, making the advantages over strict-dynamic quite limited in practice.\n\nHost-based content security policies are also quite hard to maintain, so we were looking for an alternative which works easily for a majority of customers, without the need for constant maintenance.\n\nIf you have script-src 'self'; object-src 'none'; base-uri 'self'; you are probably safer, but it means you cannot load any tag manager or any script from outside domains, which we haven't yet seen in practice.\n\n5. Can’t an attacker simply add another script with nonce=\"aem\" in the HTML, before it is replaced with the actual nonce value?\n\nNo. The nonce=\"aem\" attribute is not the main way we determine which scripts to add the new unique nonce value to.\n\nWe only apply the unique nonce value to scripts coming from head.html and static HTML files from your repository. These resources are considered trusted and controlled only by your developers. It is considered that if attackers can change these, they can already modify any code in your repository, so trying to prevent XSS is no longer a concern. 
Scripts that could theoretically end up injected through any currently unknown method in the HTML will not receive the correct nonce value, even if they have the nonce=\"aem\" attribute.\n\nThe nonce=\"aem\" attribute is rather a fallback mechanism: if Adobe ever needs to deactivate this feature, your site continues to be served without downtime, because the aem nonce value is present in both the script attributes and the policy enforcement, even if the protection is degraded.\n\n6. Can I see example sites that use it already in production?\n\nYes, of course. Here are Adobe sites running in production, with different CDNs, that are using this content security policy:\n\nFastly: https://www.aem.live\nCloudflare: https://da.live\nAkamai: https://blog.adobe.com\n\n7. I found a bypass/vulnerability which is not covered here. Can I report it?\n\nAbsolutely! Trust, but verify!\n\nWe appreciate feedback regarding any inaccuracies in this guide or in the content security policy approach, and we thank you for taking the time to report them to us!\n\nSecurity measures work best when they are peer-reviewed and their limitations are well understood!\n\nPlease responsibly disclose vulnerabilities, bypasses, or findings through one of the following two channels:\n\nReach out to Adobe using your dedicated Slack channel if you are already a customer\nNotify our Adobe Security team at psirt@adobe.com\n\nIt is preferred that your report also be accompanied by a proof-of-concept bypass, so we can best understand your concern.","lastModified":"1770314227","labs":""},{"path":"/developer/byo-git","title":"Bring your own git","image":"/developer/media_10dd64f8ce455a4d7d4248e17eca3ce3a5602a040.png?width=1200&format=pjpg&optimize=medium","description":"The following steps illustrate how Cloud Manager enables organisations to use their own Git repositories, beyond GitHub.com, for deploying code to Edge Delivery Services.","content":"style\ncontent\n\nBring Your Own Git\n\nEdge Delivery Services recommends using GitHub or GitHub 
Enterprise Cloud to host and deploy project code for frictionless adoption and seamless integration. If you are already an AEM customer and your organization is unable to use GitHub for the source code, you can now use Cloud Manager as an intermediary and configure your own Git-based repository service there.\n\nThe following Git-based repository vendors are currently supported via Cloud Manager:\n\nGitHub Enterprise (self-hosted version only)\nBitbucket (cloud version only)\nGitLab (cloud and self-hosted versions)\nAzure DevOps (cloud version only)\n\nWe’re currently supporting additional vendors in an Alpha phase. To enable support for them, contact us at cloudmanager_byog@adobe.com\n\nAdobe Hosted Repository (via Cloud Manager)\n\nNote: Edge Delivery Services expects a main branch to be present in your repository for production code.\n\n1. Prepare your Edge Delivery Services Site\n\nBefore you can configure your external repository, you need to have an aem.live org registered in the configuration service and have an admin user who can perform updates to the site configuration.\n\nIf you created the site using the one-click functionality in Cloud Manager, you will not have admin permissions for the resulting site, which will prevent you from running certain API calls. The simplest way to begin is by following the developer tutorial, which will create a site and automatically assign you the admin role within your organization.\n\nPlease note: even if you don’t use github.com as the source repository, the name of the aem.live org still needs to exist as a github.com org controlled by you. This ensures that nobody can (ab)use the same organization name.\n\n2. Configure your External Repository in Cloud Manager\n\nAs a first step, configure your external repository in Cloud Manager. 
If you are new to AEM, see here for an introduction and to find out how to access Cloud Manager.\n\nOnce the repository ownership is confirmed and the status is set to ready, you will need to set up a webhook for your git repository. This will allow Cloud Manager to receive push event notifications whenever changes are made to your repository. You can find the webhook details in Cloud Manager, and they are unique for each repository.\n\nThe webhook requires an API key, which is the same key used for interacting with any Cloud Manager API. For more details on generating the Cloud Manager public API key, refer to the public documentation. Additionally, a webhook secret must be configured in your git vendor solution when creating the webhook. This secret will be used to sign each event sent to Cloud Manager.\n\nIf your git repository is not publicly accessible, you can request a list of IPs used by Cloud Manager. To do this, send an email to cloudmanager_byog@adobe.com from the email address associated with your Adobe ID. Be sure to specify which Git platform you want to use and whether you are using a private, public, or enterprise repository structure.\n\n3. Configure your Edge Delivery Site in Cloud Manager\n\nOnce you have your aem.live site, you need to onboard it into Cloud Manager and validate its ownership. For more details, please refer to the Cloud Manager public documentation.\n\nIf at least one external Git repository with a READY status is onboarded in Cloud Manager, the \"Bring Your Own Git\" option will become available on an Edge Delivery site. When this option is selected, a pop-up window will appear, allowing you to choose which repository to use as the source code for the site.\n\nOnce the repository is selected, Cloud Manager will provide a secret that must be added in your site configuration. Be sure to copy the secret and store it securely, as it won't be visible again in Cloud Manager. 
If you reconfigure the site, Cloud Manager will generate a new secret.\n\n4. Configure your AEM Site to use Cloud Manager\n\nNext, you need to add the code repository to your site configuration. If you are new to Edge Delivery Services, it is recommended that you familiarize yourself with the Admin API.\n\ncurl -v -X POST https://admin.hlx.page/config/<org>/sites/<site>/code.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-auth-token>' \\\n  --data '{\n    \"source\": {\n      \"url\": \"https://cm-repo.adobe.io/api\",\n      \"raw_url\": \"https://cm-repo.adobe.io/api/raw\",\n      \"owner\": \"<program-id>\",\n      \"repo\": \"<repository-id>\",\n      \"type\": \"byogit\",\n      \"secretId\": \"cm-byog\"\n    }\n  }'\n\n\nYou will need to replace <org> and <site> with your actual values from the configuration service, and <program-id> and <repository-id> with the actual values from Cloud Manager. The latter details can be easily found in the “Configure Webhook” section of the repository.\n\nYou also need to create a secret named \"cm-byog\" using the Admin API. The value of this secret should be the one provided by Cloud Manager in the previous step.\n\ncurl -v -X POST https://admin.hlx.page/config/<org>/sites/<site>/secrets/cm-byog.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-auth-token>' \\\n  --data '{\n    \"value\": \"<secret from cloud manager>\"\n  }'\n\n5. Trigger your first sync\n\nIn Cloud Manager, select the \"Sync Code\" option from your site's dropdown menu.\n\nIn the popup that appears, choose the branch you wish to sync.\n\nThis action will update your Edge Delivery Site with the code from your private git repository and the selected branch. For more details about the sync jobs, you can use the following Admin API.\n\ncurl -X GET https://admin.hlx.page/job/<org>/<site>/<branch>/code \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-auth-token>'\n\n6. 
Verify your AEM Site\n\nBrowse your site at https://main--{site}--{org}.aem.page to ensure the initial synchronization from your external repository has worked as expected.\n\nTo test continuous code synchronization, make a change to one of your code files (ideally something easy to spot in the browser), push it to the main branch of your external repository, and ensure the change is reflected on your AEM Site.\n\nNote: Code synchronization happens asynchronously, and your changes may not be reflected immediately. Disable your browser cache or use an incognito window for testing to circumvent your browser’s cache and force it to fetch all resources from the server.\n\nRepoless - One codebase, many sites\n\nIf you want to use the same code across multiple sites, you can leverage the Repoless feature in Edge Delivery Services, which is fully compatible with any external Git repository.\n\nFor the second site, configure it as shown below. There is no need to set the “cm-byog” secret. Make sure to reuse the same <program-id> and <repository-id> as in the first site.\n\ncurl -v -X POST https://admin.hlx.page/config/<org>/sites/<site_2>/code.json \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-auth-token>' \\\n  --data '{\n    \"source\": {\n      \"url\": \"https://cm-repo.adobe.io/api\",\n      \"raw_url\": \"https://cm-repo.adobe.io/api/raw\",\n      \"owner\": \"<program-id>\",\n      \"repo\": \"<repository-id>\",\n      \"type\": \"byogit\"\n    }\n  }'\n\n\nAll code changes will now be automatically propagated to every site configured to use the repository onboarded in Cloud Manager as its code source.\n\nTroubleshooting\nThe site is still showing the old code even after a new commit has been pushed to git\n\nSteps to check:\n\nVerify the webhook configured in Step 2 (Configure your External Repository in Cloud Manager) to confirm that events are being sent to Cloud Manager.\nIf events are reaching Cloud Manager, use the following API call 
to check the status of the latest code sync job:\ncurl -X GET https://admin.hlx.page/job/<org>/<site>/<branch>/code \\\n  -H 'content-type: application/json' \\\n  -H 'x-auth-token: <your-auth-token>'\n\nIf you see the error message Unable to fetch hlxignore: 401, it indicates that the secret configured in Step 4 (Configure your AEM Site to use Cloud Manager) is incorrect.\n\nImportant notes:\n\nA repository onboarded in Cloud Manager can only be linked to one site at a time.\nIf the same repository is connected to another site from Cloud Manager, the original configuration (including the secret generated for the first site) will be revoked.\nTo use a single repository with multiple sites, you must use the repoless feature.","lastModified":"1773419669","labs":""},{"path":"/docs/placeholders","title":"Placeholders","image":"/docs/media_1924a42826eff0f60ff46c462d9fe3749e6a7bb66.png?width=1200&format=pjpg&optimize=medium","description":"In most websites, there are strings or variables that will be used throughout the site. Especially in sites that need to support multiple languages, it ...","content":"style\ncontent\n\nPlaceholders\n\nIn most websites, there are strings or variables that will be used throughout the site. Especially in sites that need to support multiple languages, it is not a good idea to hard code such values. 
Instead placeholders can be used and managed centrally.\n\nNote: depending on your site content structure, placeholders might not be in use.\n\nPlaceholders can be managed as a spreadsheet that is either in the root folder of the project or in the locales root folder in the case of a multilingual site.\n\nName the file placeholders for Google Docs.\nName the file placeholders.xlsx for SharePoint.\n\nThe spreadsheet has to contain at least two columns titled Key and Text.\n\nThe Key column is an identifier that is transformed automatically to be easily accessible via code.\nThe Text is the literal text (or string) for a placeholder with a given key.\n\nAfter making changes to your placeholder spreadsheet, you can preview your changes via the sidekick and have your stakeholders check that the new placeholders are working on your .page preview website before publishing the placeholder changes to your production website. See the Sidekick documentation for more information about switching between environments.\n\nAre you a developer and curious to learn how to use placeholders in your code? Look here.\n\nPrevious\n\nSlack Bot\n\nUp Next\n\nPush Invalidation","lastModified":"1745328784","labs":""},{"path":"/docs/integrations","title":"Integrations Overview","image":"/docs/media_1176cde3920563861963472ad71a1f1a9c817a436.png?width=1200&format=pjpg&optimize=medium","description":"Adobe Experience Manager integrates with all technologies required to create and operate a high-performing website. For some of the key integrations, we have provided detailed ...","content":"style\ncontent\n\nIntegrations Overview\n\nAdobe Experience Manager integrates with all technologies required to create and operate a high-performing website. For some of the key integrations, we have provided detailed guides.\n\nCDN Integration Overview\n\nA CDN is required to deliver content from AEM to your visitors under your domain. 
This overview links to guides for all supported CDNs.\n\nAuthoring Integration Overview\n\nAEM has multiple options for authoring providers, including Microsoft Word, Google Docs, and multiple Adobe-provided options. This guide is a good starting point.\n\nAEM Forms Integration\n\nAdobe Experience Manager Sites integrates with AEM Forms. This guide helps you to get started.\n\nBuilding integrations with Edge Delivery Services\n\nLearn how to build Integrations with AEM Edge Delivery Services\n\nAdobe Experience Cloud Integration\n\nConnect AEM with Adobe's marketing, analytics, and personalization tools to streamline workflows and deliver consistent customer experiences across channels.\n\nAdobe Target Integration\n\nConnect AEM with Adobe Target to test and personalize content variations, optimize user experiences based on visitor data, and deliver targeted content to specific audience segments.\n\nGoogle Analytics & Tag Manager Integration\n\nConnect AEM with Google Analytics (GA) and Google Tag Manager (GTM) to track page view and custom events configured in your GTM containers.\n\nCloudflare Zero Trust Site Protection\n\nEnhance security with Cloudflare Zero Trust authentication and authorization features that control access to your AEM site, protecting against unauthorized users and potential threats.\n\nGitHub Actions Integration\n\nGitHub Actions can be used to automate based on code and content changes.\n\nBring your Own Git\n\nIn addition to supporting GitHub.com out of the box, other git providers can be integrated with AEM, too.\n\nBring your Own Markup\n\nIn addition to the natively supported content sources, more 3rd party content sources can easily be integrated using the BYOM interface.\n\nContent Fragments from other AEM instances\n\nContent Fragments from AEM instances without Edge Delivery Services can be integrated into Edge Delivery Services using this Early Access technology.\n\nAI Coding Agents\n\nGitHub Copilot, OpenAI Codex, Google Gemini, 
Claude Code, Cursor, Zed, OpenCode, Kiro, Windsurf, Aider, Cline and other AI coding agents\n\nUnsupported Integrations\n\nSome integrations have proven problematic in customer environments and are therefore unsupported or heavily discouraged.","lastModified":"1757426479","labs":""},{"path":"/docs/admin-apikeys","title":"Admin API Keys","image":"/docs/media_1440edf7c6f082e7b36d324d1ed8927febc5e8e6e.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to create admin API keys","content":"style\ncontent\n\nAdmin API Keys\n\nThis document outlines the process of generating and managing API keys for the Admin Service API.\n\n1. Overview\n\nAPI keys are unique identifiers used to authenticate requests to the Admin Service API. They function similarly to a password, granting access to your account's resources. When making a request, the API key must be included in the request header.\n\nHow to use API keys in requests (example using cURL):\n\nYou should include your API key in the X-Auth-Token or Authorization header of your HTTP requests.\n\ncurl -si \\\n  https://admin.hlx.page/.... \\\n  -H 'X-Auth-Token: YOUR_API_KEY_HERE'\n\n\nOr\n\ncurl -si \\\n  https://admin.hlx.page/... \\\n  -H 'Authorization: token YOUR_API_KEY_HERE'\n\nKey Expiration and Rotation:\n\nFor security best practices, API keys have an expiration period. It is crucial to rotate your API keys regularly to minimize the risk of unauthorized access if a key is compromised. We recommend setting up a process to generate new keys and replace the old ones before their expiration. This can be automated or done manually, depending on your operational needs.\n\nInteroperability\n\nWhen you successfully create a new API key or import an existing one, the key is automatically enabled for your Admin Service API. There is no need to manually add the API Key ID to the access.admin.apiKeyId property of the site configuration. 
Note that both API key ID sources are respected: the keys listed here and the one in the access.admin.apiKeyId property.\n\nPermissions\n\nTo create, update, or delete keys, your request needs to be authenticated with an admin role.\n\n2. Create\n\nNew API keys can be created by sending a POST request to the org, profile or site config of your project.\n\nEndpoint: POST https://admin.hlx.page/config/{org}/sites/{site}/apiKeys.json\n\nRequest Body Example:\n\n{\n  \"description\": \"API key for development environment\",\n  \"roles\": [\"publish\"]\n}\n\n\nResponse Example (Success):\n\n{\n  \"id\": \"newly_generated_key_id\",\n  \"value\": \"your_new_api_key_value\",\n  \"description\": \"API key for development environment\",\n  \"created\": \"2023-10-27T10:00:00Z\",\n  \"expiration\": \"2024-10-27T10:00:00Z\"\n}\n\n\nNote: The value in the response is the actual API key. Store it securely, as it will not be retrievable again through the API.\n\nNote: The API key is never stored in our system and cannot be retrieved at a later time.\n\n3. Import\n\nExisting API keys can be imported using the same /apiKeys.json endpoint as above, but by including a jwt payload property in the request body. This method is typically used for migrating existing keys.\n\nEndpoint: POST https://admin.hlx.page/config/{org}/sites/{site}/apiKeys.json\n\nRequest Body Example:\n\n{\n  \"description\": \"Imported API key for legacy system\",\n  \"jwt\": \"YOUR_JWT_PAYLOAD_CONTAINING_KEY_INFO\"\n}\n\n\nNote: The API key is never stored in our system and cannot be retrieved at a later time.\n\n4. 
List\n\nYou can retrieve a list of all existing API keys by sending a GET request to the org, profile or site config of your project.\n\nEndpoint: GET https://admin.hlx.page/config/{org}/sites/{site}/apiKeys.json\n\nResponse Example:\n\n{\n  \"key_id_1\": {\n    \"id\": \"key_id_1\",\n    \"description\": \"API key for production service\",\n    \"created\": \"2023-01-15T09:30:00Z\",\n    \"expiration\": \"2024-01-15T09:30:00Z\"\n  },\n  \"key_id_2\": {\n    \"id\": \"key_id_2\",\n    \"description\": \"API key for internal tooling\",\n    \"created\": \"2023-03-20T14:00:00Z\",\n    \"expiration\": \"2024-03-20T14:00:00Z\"\n  },\n  \"key_id_3\": {\n    \"id\": \"key_id_3\",\n    \"description\": \"API key for development environment\",\n    \"created\": \"2023-10-27T10:00:00Z\",\n    \"expiration\": \"2024-10-27T10:00:00Z\"\n  }\n}\n\n\nNote: For security reasons, the actual API key (value property) is not returned when listing keys. Only metadata about the keys is provided.\n\n5. Update\n\nTo update the description of an existing API key, send a POST request to the respective config entry.\n\nEndpoint: POST https://admin.hlx.page/config/{org}/sites/{site}/apiKeys/{id}.json\n\nRequest Body Example:\n\n{\n  \"description\": \"Updated description for production API key\"\n}\n\n\nResponse Example (Success):\n\n{\n  \"id\": \"existing_key_id\",\n  \"description\": \"Updated description for production API key\",\n  \"created\": \"2023-10-27T10:00:00Z\",\n  \"expiration\": \"2024-10-27T10:00:00Z\"\n}\n\n6. 
Delete\n\nTo delete an API key, send a DELETE request to the respective config entry.\n\nEndpoint: DELETE https://admin.hlx.page/config/{org}/sites/{site}/apiKeys/{id}.json","lastModified":"1747836730","labs":""},{"path":"/docs/fragments","title":"Fragments","image":"/docs/media_12005483fba21aec5db1f6f8403a31c2fc1d30a00.png?width=1200&format=pjpg&optimize=medium","description":"Fragments are reusable chunks of site content: think headers, footers, or any element that appears on multiple pages. In Adobe Experience Manager (AEM) Edge Delivery ...","content":"style\ncontent\n\nFragments\n\nFragments are reusable chunks of site content: think headers, footers, or any element that appears on multiple pages. In Adobe Experience Manager (AEM) Edge Delivery Services, fragments have moved from “nice-to-have” add-ons to a built-in foundation of the product.\n\nMaking fragments first-class citizens eliminates duplicate markup, streamlines caching all the way to the CDN, and delivers consistent performance. Every new AEM project built with boilerplate now ships with header and footer fragments out of the box, so teams can author once and reuse everywhere with minimal effort.\n\nHow Fragments work\n\nA fragment is a standalone piece of content (authored as either a document or a spreadsheet) that can be reused across multiple pages. Instead of copying the same content into every page, authors create a single source of truth and reference it where needed. This content is then inserted into the page at the time of delivery: documents are served as semantic HTML and spreadsheets as JSON (data-driven or table-like content).\n\nBecause fragments are delivered from separate resources, they can be cached and served independently. 
This improves performance and reduces the load on the main page, especially for content that appears across many parts of a site.\n\nReferencing Fragments\n\nFragments can be referenced in multiple ways, depending on the use case.\n\nAutomatically by boilerplate: Fragments (like the header and footer) are programmatically pulled in by the site's display logic.\nConditionally by metadata: Fragments can also be loaded based on page metadata. For example, a promotional banner might only appear on pages using a specific template.\nManually by authors: Authors can selectively place a Fragment block with an explicit reference to a fragment document using the .aem.page or .aem.live link.\n\nDocuments in the /fragments/ folders are automatically recognized and treated as fragments. For example, a link to https://main--site--org.aem.page/fragments/my-fragment will be auto-blocked and decorated without additional author effort.\n\nFor implementation details, see the fragment block documentation.\n\nWhen to use Fragments\n\nFragments are ideal for content that appears across multiple pages, but reuse isn’t the only reason to use them. Sometimes, isolating a complex section (even if it appears only once on the site) can make some pages easier to manage from an authoring perspective. Creating smaller, more focused fragments can reduce visual and structural clutter.\n\nFragments can also improve delivery performance by allowing a more controlled loading pattern for content that may not be relevant to all users (like content below the fold). It is important, however, that performance considerations are not the primary driver for content modeling decisions about when to use fragments.\n\nWhen to avoid Fragments\n\nWhile fragments can offer reusability and simplicity, they also introduce a level of indirection which can sometimes make authoring less intuitive. 
Instead of managing all content for a page in one document, fragments require authors to deal with references, navigate between multiple documents, and understand the separate life cycles (preview/publish) of those resources. For this reason, it’s best to use fragments only when they provide a clear benefit.\n\nIn most cases, fragments are best suited to content that enhances the page but isn’t central to its meaning (like headers, footers, or supplemental sections). Primary canonical content should remain in the main document to ensure clarity for both authors and automated systems (like search engines and indexers) that are trying to identify the main content on a page.\n\nThis is especially relevant for tools like spiders and robots that don’t support JavaScript execution. Because fragments are decorated on the client side, their content may not be indexed alongside the rest of the embedding page. Failing to separate essential and auxiliary content can result in inconsistent indexing and negatively impact SEO performance.\n\nFragments and Indexing\n\nSince their primary purpose is to be inserted into other pages, it often doesn't make sense to have fragment pages indexed by search engines, or to have AEM add them to your site's query index and sitemap independently. 
It is therefore recommended to set robots to noindex in the metadata block of the fragment document, or globally via the bulk metadata sheet:\n\nURL\t\tRobots\n*/fragments/*\tnoindex\n\nOther Types of Fragments\n\nIf you want to use Content Fragments from Adobe Experience Manager as a Cloud Service authoring in your Edge Delivery Services site you might be interested in trying the following approach to publish Content Fragments directly to Edge Delivery Services as self-contained semantic HTML.","lastModified":"1761217980","labs":""},{"path":"/docs/configuration-templates","title":"Configuration templates","image":"/docs/media_1e85aed06630489b93102500099a073c2d15f9a46.png?width=1200&format=pjpg&optimize=medium","description":"When using AEM as your content authoring source, you can use the Sites console to easily create and manage your project configuration by using a ...","content":"style\ncontent\n\nConfiguration templates\n\nWhen using AEM as your content authoring source, you can use the Sites console to easily create and manage your project configuration by using a configuration template. By leveraging the powerful features of the Sites console, your configuration can be inherited across sites using Multi-site management (MSM).\n\nPrerequisites\n\nYou must have already created your Edge Delivery Services project with AEM as your authoring source. Please see the tutorial for more information.\n\nCreating a configuration\nSign in to your AEM as a Cloud Service authoring instance, go to the Sites console, and navigate to the root of the site where you wish to create a configuration. Tap or click Create → Page.\n\nOn the Template tab of the create page wizard, tap or click the Configuration template to select it and then tap or click Next.\nPlease see the document Managing tabular data with AEM authoring as your content source for more information about templates.\n\nThe Properties tab of the wizard allows you to specify a Title for the configuration. 
The default value of Configuration can typically be left as-is. Tap or click Create.\n\nIn the Success dialog, tap or click Done.\nThe configuration is created in the site root.\nEditing a configuration\n\nOnce you create the configuration from the template, you must edit it to customize your configuration data.\n\nTap or click the configuration you created to select it and then tap or click Properties in the toolbar.\nAlternatively you can use the hotkey p once you have selected the configuration.\n\nIn the page properties window, you can manage your configuration information across four tabs.\nBasic\nAccess Control\nCDN\nMetadata\nClick or tap Save & Close to save your changes or Cancel to abort.\n\nEach tab exposes configuration options in a convenient way in the UI. For details about the underlying options, please see the document Project Configuration.\n\nBasic\n\nThis tab allows you to change the title of the configuration.\n\nAccess Control\n\nThis tab allows you to manage access to your project.\n\nAuthor Users - The email glob of the users with the author role\nTap or click Add to add a row\nTap or click Remove to delete a row\nAdmin Users - The email glob of the users with the admin role\nTap or click Add to add a row\nTap or click Remove to delete a row\n\nFor more information on roles, please see the document Configuring Authentication for Authors.\n\nCDN\n\nThis tab allows you to select which CDN you use with your project and define its options.\n\nCDN Vendor - Define which CDN service you will use with your project. 
Options vary depending on which service you select.\nAdobe Managed CDN\nFastly\nAkamai\nCloudflare\nCloudFront\nAdditional Resources\n\nThis tab allows you to define additional metadata resources for your project.\n\nAdditional Metadata - Provide a path for a metadata sheet\nTap or click Add to add a row\nTap or click Remove to delete a row\n\nFor additional information, please see the document Bulk Metadata.\n\nConfiguration Inheritance\n\nWhen using AEM as your authoring source, project configuration templates fully support inheritance.\n\nFor example, you may have a single API key for your CDN for your organization. However, your individual, localized sites would have different host names. You can set the API key in the blueprint, which is rolled out to your localized sites. For each site, you can then break inheritance for the host name and set it individually.\n\nFor more information about blueprints, inheritance, and MSM, please see the document Reusing Content: Multi Site Manager and Live Copy.\nFor more information on how to set up MSM for your Edge Delivery Services with AEM authoring project, please see the document Multi site management with AEM authoring as your content source.\nFor more information on breaking and reinstating inheritance in the Sites Console, please see the document Editing Page Properties.\nConfiguration templates and configuration spreadsheets\n\nConfiguration templates support a reasonable subset of configurations that are common and possible. There may be edge cases that configuration templates do not cover.\n\nIf you find that the configuration template does not support your use case, you must first delete the configuration you created. 
You then have two options:\n\nCreate a spreadsheet called configuration and manage your settings in that spreadsheet.\nActivate the Configuration Service and manage your settings using the service.","lastModified":"1750862654","labs":""},{"path":"/developer/gtm-martech-integration","title":"Configuring Google Analytics & Tag Manager Integration","image":"/developer/media_1ba5390c0b3026987033e62fe020d3f7dae6c0332.png?width=1200&format=pjpg&optimize=medium","description":"This article will walk you through the steps of setting up an integration with Google Analytics (GA) and Google Tag Manager (GTM) . This will ...","content":"style\ncontent\n\nConfiguring Google Analytics & Tag Manager Integration\n\nThis article will walk you through the steps of setting up an integration with Google Analytics (GA) and Google Tag Manager (GTM). This will let you automatically track page views and custom events, along with any other tags configured in your GTM containers.\n\nChoosing the Right Integration\n\nThis document covers integration with Google's marketing technology stack (Google Analytics 4, Google Tag Manager).\n\nLooking for Adobe Experience Cloud integration instead?\nSee our Adobe Experience Cloud Integration guide.\n\nWhen to Use This Integration\nYou're using Google Analytics as your primary analytics solution\nYou want to leverage Google Tag Manager for tag management\nYou're already invested in the Google marketing ecosystem\nIntegration Comparison\nFeature\t Google Analytics & GTM\t Adobe Experience Cloud \n Analytics\t Google Analytics 4\t Adobe Analytics \n Tag Management\t Google Tag Manager\t Adobe Experience Platform Tags \n Personalization\t Limited (via GTM)\t Adobe Target/AJO \n Data Layer\t Google Data Layer\t Adobe Client Data Layer \n Cost\t Free tier available\t Enterprise licensing \n Privacy\t GDPR/CCPA compliant\t GDPR/CCPA compliant\nIntegration Overview\n\nThe Google Analytics & Tag Manager integration provides a streamlined marketing technology 
stack that enables:\n\nComprehensive analytics with Google Analytics 4\nFlexible tag management through Google Tag Manager\nCustom event tracking for business-specific metrics\nPerformance-optimized loading aligned with AEM EDS phases\nPrivacy-compliant data collection\n\nThis integration is designed to work seamlessly with AEM Edge Delivery Services while maintaining optimal performance and user experience.\n\nHow It Works\n\nThe integration splits the traditional monolithic GTM approach into optimized phases:\n\nGoogle Analytics 4 Tag: Provides comprehensive web analytics and user behavior tracking\nGoogle Tag Manager: Manages all marketing tags and pixels from a centralized interface\nGoogle Data Layer: Standardizes data collection and event tracking\nPhased Loading: Separates critical tracking from delayed tag execution\nFirst-Party Data: Ensures reliable tracking while respecting privacy preferences\nRationale\n\nIn a traditional GTM implementation, one container is used to initialize tracking and load all other containers. This approach typically has a performance impact on the initial page load, degrading the Core Web Vitals (CWV).\n\nThis optimized approach attaches tracking actions to the Edge Delivery Services phases. 
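To illustrate the phased split, here is a minimal sketch in the style of a project's scripts.js. The MARTECH_PHASES map and scriptsForPhase helper are hypothetical illustrations, not the plugin's actual API; the IDs are placeholder formats.

```javascript
// Sketch only: assign Google scripts to Edge Delivery Services loading phases.
// GA4 loads in the lazy phase (after LCP), GTM containers in the delayed phase.
const MARTECH_PHASES = {
  lazy: ['G-XXXXXXXXXX'],   // GA4 measurement ID placeholder
  delayed: ['GTM-XXXXXXX'], // GTM container ID placeholder
};

// Resolve which script URLs a given phase should load.
function scriptsForPhase(phase) {
  return (MARTECH_PHASES[phase] || []).map((id) => (
    id.startsWith('G-')
      ? `https://www.googletagmanager.com/gtag/js?id=${id}`
      : `https://www.googletagmanager.com/gtm.js?id=${id}`
  ));
}

scriptsForPhase('lazy');
// → ['https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX']
```

In a real project, the lazy-phase scripts would be loaded from loadLazy and the delayed ones from the delayed phase, keeping all tag execution off the critical path.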
By splitting up the loading of the libraries and containers, we have minimized the impact on page performance, while maintaining page event tracking.\n\nPerformance Benefits\nReduced initial load time by deferring non-critical tags\nImproved Core Web Vitals through phased loading\nFaster Time to Interactive by prioritizing essential tracking\nMaintained tracking accuracy while optimizing performance\nPrerequisites\n\nBefore you can use this plugin, make sure you have access to:\n\nGoogle Analytics account with GA4 property configured\nGoogle Tag Manager account with container(s) set up\n\nYou'll also need the following information:\n\nGoogle Analytics Measurement ID (format: G-XXXXXXXXXX)\nGoogle Tag Manager container ID(s) (format: GTM-XXXXXXX)\nIf using multiple containers, make a note of which should be loaded in which phase (lazy or delayed)\nGTM Container Configuration\nEnsure your GTM container has the necessary tags configured\nSet up triggers for page views and custom events\nConfigure variables for data layer values\nTest your container in preview mode before deploying\nInstallation & Configuration\nStep 1: Install the Plugin\n\nFollow the technical steps in the aem-gtm-martech GitHub repository.\n\nStep 2: Configure Consent Management\n\nMake sure you implement and pass a consentCallback according to the documentation. This is essential for GDPR/CCPA compliance.\n\nPrivacy Considerations: The integration includes built-in consent management support. 
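For example, a consentCallback wired to Google Consent Mode could look like the sketch below. The choices object shape is a hypothetical CMP payload; the plugin's actual callback signature is defined in its documentation.

```javascript
// Sketch: deny-by-default consent, updated once the user makes a choice.
const dataLayer = [];
function gtag(...args) { dataLayer.push(args); }

// Default state: nothing is tracked until the user explicitly opts in.
gtag('consent', 'default', {
  analytics_storage: 'denied',
  ad_storage: 'denied',
});

// Called by your consent management platform with the user's choices.
function consentCallback(choices) {
  gtag('consent', 'update', {
    analytics_storage: choices.analytics ? 'granted' : 'denied',
    ad_storage: choices.marketing ? 'granted' : 'denied',
  });
}

consentCallback({ analytics: true, marketing: false });
```

The default/update pair mirrors Google Consent Mode semantics: tags see the denied default until the update lands, so no tracking fires before permission.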
Default consent is set to require explicit user permission before tracking begins.\n\nStep 3: Deploy Your Code\n\nCommit and push your code branch to trigger the deployment.\n\nStep 4: Test the Integration\n\nOpen a browser to the branch containing the plugin and verify the configuration.\n\nVerification & Testing\n\nTo verify the configuration is correct you can:\n\nBrowser-Based Testing\nUse the Google Tag Assistant Chrome extension to view the events\nUse the Google Tag Assistant site to view events in real-time\nMonitor the network tab for scripts loaded from Google\nCheck for calls to https://www.google-analytics.com/g/collect\nGoogle Analytics Verification\nCheck real-time reports in Google Analytics\nVerify page view events are being recorded\nTest custom event tracking\nConfirm conversion tracking (if configured)\nGTM Preview Mode\nUse GTM's preview mode to debug tag firing\nVerify data layer variables are populated correctly\nCheck that triggers are working as expected\nValidate tag configurations\nExpected Network Activity\n\nWhen properly configured, you should see:\n\nGA4 measurement calls to google-analytics.com\nGTM container loads from googletagmanager.com\nData layer pushes in browser console (if debugging enabled)\nPrivacy & Consent Management\n\nThe integration provides comprehensive privacy controls:\n\nDefault consent state: Requires explicit user permission\nGranular consent categories: Analytics, marketing, preferences\nConsent callback integration: Works with popular consent management platforms\nData retention controls: Configurable data retention periods\nTroubleshooting\nCommon Issues\n\nNo tracking data in Google Analytics:\n\nVerify your GA4 Measurement ID is correct\nCheck that consent has been granted\nEnsure GTM container is published, not just saved\nMonitor browser console for JavaScript errors\n\nGTM tags not firing:\n\nCheck GTM preview mode for trigger debugging\nVerify data layer variables are populated\nEnsure consent is 
properly granted for relevant storage types\nCheck tag configuration and triggers\n\nPerformance degradation:\n\nReview the number of tags in your GTM container\nConsider moving non-critical tags to delayed phase\nMonitor Core Web Vitals impact\nOptimize trigger conditions to reduce unnecessary tag fires\nDebugging Tools\nGTM Preview Mode: Real-time tag debugging\nGoogle Tag Assistant: Browser extension for validation\nGA4 DebugView: Real-time event monitoring\nBrowser DevTools: Network and console monitoring\nNext Steps & Related Resources\nDemo & Examples\n\nA demo site with the plugin can be found here.\n\nAdditional Documentation\nGoogle Analytics 4: GA4 Setup Guide\nGoogle Tag Manager: GTM Implementation Guide\nAlternative Integration Options\nAdobe Experience Cloud: Adobe MarTech Integration\nTechnical Resources\nGitHub Repository: adobe-rnd/aem-gtm-martech\nGoogle Resources\nTag Assistant Chrome Extension: Install from Chrome Web Store\nGoogle Tag Assistant: Online Validation Tool\nGoogle Analytics Help: Support Documentation\nGoogle Tag Manager Help: Support Documentation","lastModified":"1753897162","labs":"AEM Sites"},{"path":"/docs/error-pages","title":"Error Pages","image":"/docs/media_18e89acb8421db5896497c112181a207f10ae9e52.png?width=1200&format=pjpg&optimize=medium","description":"Adobe Experience Manager error page handling and customization options for developers","content":"style\ncontent\n\nError Pages\n\nError Pages are served to web browsers when users visit your website and encounter an error. Adobe Experience Manager allows you to control the content and behavior of certain error pages, helping users find their way.\n\n404 Errors\n\nA 404 error occurs when the page or resource being requested doesn’t exist. This can happen for any number of reasons such as old and out of date bookmarks, bad links on external websites, or just a simple typo. 
If you move a page or otherwise restructure your website, you should create redirects where possible to prevent this from happening.\n\nIn AEM, your 404 page is served using the 404.html file at the root of your code repository. The version of this file in the AEM Boilerplate will display your branded header and footer, but you can customize the code and content of this page as you see fit.\n\nYou can monitor the 404 errors that occur using OpTel.\n\nCustomizing 404 Content with Fragments\n\nWhen working with repoless sites (many sites that share a single codebase), you may need more control over the error content than can be provided by a single HTML file. The recommended way of handling this is to use a fragment that gets loaded in your 404 page, replacing the page’s main content with the fragment content.\n\nIn your scripts.js, add the following function, modifying the fragment path as needed for your site (it could also be dynamic based on things like site bulk metadata, etc.).\n\nfunction loadErrorPage(main) {\n  if (window.errorCode === '404') {\n    const fragmentPath = '/fragments/404';\n    const fragmentLink = document.createElement('a');\n    fragmentLink.href = fragmentPath;\n    fragmentLink.textContent = fragmentPath;\n    const fragment = buildBlock('fragment', [[fragmentLink]]);\n    const section = main.querySelector('.section');\n    if (section) section.replaceChildren(fragment);\n  }\n}\n\n\nThen, edit the loadEager function to call this before calling decorateMain.\n\nconst main = doc.querySelector('main');\nif (main) {\n  if (window.isErrorPage) loadErrorPage(main);\n  decorateMain(main);\n  document.body.classList.add('appear');\n  await loadSection(main.querySelector('.section'), waitForFirstImage);\n}\n\n\nThis technique can be used for more than just repo-less sites. 
If you need to serve language-specific 404 content, want to let authors control the content of the 404 page without developer involvement, or have any other scenario where the 404 content can vary, fragments are the way to do that.\n\nPrevious\n\nBuild\n\nUp Next\n\nAnatomy of an AEM Project","lastModified":"1753901065","labs":""},{"path":"/docs/schema-structured-data","title":"Using Schema (Structured Data) as JSON-LD","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Before adding any structured data (schema), define what SEO or SERP outcome you're targeting:","content":"style\ncontent\n\nUsing Schema (Structured Data) as JSON-LD\nSet Clear Goals for Schema Implementation\n\nBefore adding any structured data (schema), define what SEO or SERP outcome you're targeting:\n\nRich Results (FAQs, Recipes, Reviews, Product snippets)\nIncreased CTR through enhanced listings\nVisibility Improvements in Google Search Console (GSC)\n\nSet clear goals in one or more metrics and make sure that there are no side-effects relative to SERP and ranking.\n\nEstablish a baseline without schema using tools like GSC, and compare against performance after schema deployment. The impact is unlikely to be measurable immediately, so allow at least a couple of weeks to validate the effects.\n\nNote: Google does not guarantee that structured data will be shown in search results, even if it is valid [source].\n\nTwo Approaches: Block-Based vs Page-Based Schema\n\nAEM supports two main approaches to adding schema, and which one you choose depends on the type of content and how critical it is to appear in the first-pass crawl.\n\nBlock-Based Schema is typically implemented via client-side JavaScript. It's great when you're working with content that's already visible on the page and just needs to be restructured into JSON-LD. Think FAQs, recipes, or reviews—these can be automatically converted with no extra effort for the author. 
Just drop in a block, and the system generates the necessary schema in the background. This is fast, lightweight, and ideal for high-velocity content creation. The tradeoff? It relies on JavaScript execution, so it might not always be picked up immediately by crawlers [source].\n\nPage-Based Schema, on the other hand, is baked into the page metadata. This approach is most useful when you need the schema to be picked up in the initial HTML crawl—such as with product or offer pages. These values usually come from a PIM, ERP, or commerce platform and are added automatically, with little to no manual input. It’s more robust for structured e-commerce content, especially when you're targeting rich results that depend on immediate visibility.\n\nQuote from Google: \"We strongly recommend using HTML for critical content that you want to be indexed quickly, as it helps ensure discovery in the initial crawl pass\" [source].\n\nHow to Test and Measure\nTools:\nGoogle Search Console > Enhancements (Rich Results performance)\nURL Inspection Tool (check rendered HTML)\nSchema Markup Validator\nSplit Testing using tools like ContentKing, or A/B publishing if supported\nMetrics to Track:\nCTR before and after adding schema\nImpression growth for pages with schema\nError or warnings in GSC Enhancements\n\nExample: Add FAQ schema to a page block and monitor CTR uplift using GSC over 2–4 weeks. Compare with similar pages without schema.\n\nBest Practice: Iterate & Promote\nStart with Page-Based schema for quick iteration without developer support.\nOnce schema is validated and shows improvement, transition to Block-Based for authoring scale.\nRegularly audit schema output for correctness and completeness.\nAlways align schema content with visible content to avoid manual penalties.\nSummary\n\nUsing schema in AEM may enhance search presence but requires a clear testing and measurement strategy. 
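To make the block-based approach concrete, here is a sketch of a helper that converts visible question/answer pairs into FAQPage JSON-LD. The helper name is hypothetical and not part of the AEM Boilerplate; the output shape follows the schema.org FAQPage vocabulary.

```javascript
// Sketch: build FAQPage JSON-LD from visible Q/A pairs scraped from a block.
function buildFaqJsonLd(pairs) {
  return {
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: pairs.map(([question, answer]) => ({
      '@type': 'Question',
      name: question,
      acceptedAnswer: { '@type': 'Answer', text: answer },
    })),
  };
}

const jsonLd = buildFaqJsonLd([
  ['What is Edge Delivery Services?', 'A composable delivery service in AEM.'],
]);
// In a block's decorate(), serialize jsonLd into a
// <script type="application/ld+json"> element appended to document.head.
```

Because this runs client-side, it carries the JavaScript-execution tradeoff described above: the markup matches the visible content, but crawlers may only pick it up on the rendered pass.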
Block-based schema is ideal for repeatable visible content; page-based metadata is critical for e-commerce product data, and is useful for quick initial prototyping. Always validate and measure performance before scaling.","lastModified":"1753974914","labs":""},{"path":"/docs/storefront","title":"Commerce Storefront for Edge Delivery Services","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to build the fastest shopping experience on the web.","content":"style\ncontent\n\nHow to build a Storefront in AEM\n\nThe Commerce Storefront is an e-commerce site powered by Edge Delivery Services in AEM to create the fastest shopping experience on the web.\n\nFollow these links to learn more about the architecture of the Commerce Storefront and how to create one:\n\nAbout Commerce Storefront\nHow to create a Commerce Storefront project\nHow to add content to a Commerce Storefront","lastModified":"1754580895","labs":""},{"path":"/developer/json2html","title":"JSON2HTML for Edge Delivery Services","image":"/developer/media_17846d607e543805b0a5e179b3ca7b93de2bacbd8.png?width=1200&format=pjpg&optimize=medium","description":"Learn how to convert JSON from your backend endpoints to Edge Delivery Services friendly Semantic HTML to build dynamic pages via configuration only","content":"style\ncontent\n\nJSON2HTML for Edge Delivery Services\n\nDelivering well-structured markup is essential for any AEM site, but when working with a legacy headless CMS, you're often limited to raw JSON responses - far from the clean HTML you're aiming for.\n\nThat's where JSON2HTML comes in. This generic worker bridges the gap by transforming JSON data into fully formed HTML pages tailored for Edge Delivery Services.\n\nBest of all, it requires minimal setup - no need to deploy your own service. 
Here’s how it works…\n\nUsage\n\nOnce you have your JSON data, endpoint, and a Mustache template to transform JSON into BYOM-friendly HTML, follow these steps to stitch everything together.\n\nStep 1: Prepare your JSON data\nStep 2: Use a Mustache template to generate Edge Delivery-friendly HTML using BYOM (Bring your own markup)\nStep 3: Add an overlay to your content source as follows:\n\"overlay\": {\n   \"url\": \"https://json2html.adobeaem.workers.dev/<ORG>/<SITE>/<BRANCH>\",\n   \"type\": \"markup\"\n },\n\nStep 4: Update the configuration using a POST call to the /config/ endpoint:\nPOST https://json2html.adobeaem.workers.dev/config/<ORG>/<SITE>/<BRANCH>\nAuthorization: token <your-admin-token>\nContent-Type: application/json\n [\n    {\n      \"path\": \"/path1/abc/def/\",\n      \"endpoint\": \"https://<some-endpoint/path>/event-{{id}}.json\",\n      \"regex\": \"/[^/]+$/\",\n      \"template\": \"/templates/category/template.html\",\n      \"headers\": {\n        \"X-API-Key\": \"your-api-key-here\",\n        \"Accept\": \"application/json\"\n      },\n      \"forwardHeaders\": [\n        \"Authorization\"\n      ],\n      \"relativeURLPrefix\": \"https://some-domain.com\"\n    },\n    {\n      \"path\": \"/path2/xyz/\",\n      \"endpoint\": \"https://<another-endpoint/path>/event-{{id}}.json\",\n      \"regex\": \"/[^/]+$/\",\n      \"template\": \"/templates/category/template.html\"\n    },\n    {\n       \"path\": \"/dynamic-pages/\",\n       \"endpoint\": \"https://www.edge-delivery-site.com/all-the-data.json\",\n       \"arrayKey\": \"data\",\n       \"pathKey\": \"URL\",\n       \"template\": \"/templates/fun-template.html\"\n    }\n ]\n\nStep 5: Preview the dynamic path that uses this overlay. The HTML will be generated based on the configuration above, and you can then visit the .aem.page URL of that path to see the result. 
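In the configuration above, the worker extracts an ID from the incoming path using regex and substitutes it for {{id}} in the endpoint. A minimal sketch of that behavior (the resolveEndpoint helper and api.example.com endpoint are illustrative, not the worker's actual code):

```javascript
// Illustrative config: the regex grabs the last path segment as the {{id}} value.
const config = {
  regex: /[^/]+$/,
  endpoint: 'https://api.example.com/events/event-{{id}}.json',
};

// Resolve the data endpoint for an incoming request path (assumed worker behavior).
function resolveEndpoint(requestPath, { regex, endpoint }) {
  const match = requestPath.match(regex);
  return match ? endpoint.replace('{{id}}', match[0]) : null;
}

resolveEndpoint('/path1/abc/def/1234', config);
// → 'https://api.example.com/events/event-1234.json'
```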
Once previewed, everything in Edge Delivery Services will continue to work as expected.\nConfiguration Management\n\nThe configuration is managed through a POST request to the /config/:org/:site/:branch endpoint.\n\nAuthentication\nAuthorization Header: Include your AEM Admin API token in the Authorization header\nRequest Format\nMethod: POST\nURL: https://json2html.adobeaem.workers.dev/config/<ORG>/<SITE>/<BRANCH>\nHeaders:\nAuthorization: token <your-admin-token>\nContent-Type: application/json\nBody: JSON configuration object\nConfiguration Structure\n\nThe request body should contain an array with one or more configuration objects:\n\n[\n  {\n    \"path\": \"/path1/abc/def/\",\n    \"endpoint\": \"https://<some-endpoint/path>/event-{{id}}.json\",\n    \"regex\": \"/[^/]+$/\",\n    \"template\": \"/templates/category/template.html\",\n    \"headers\": {\n      \"X-API-Key\": \"your-api-key-here\",\n      \"Accept\": \"application/json\"\n    },\n    \"forwardHeaders\": [\n      \"Authorization\"\n    ],\n    \"relativeURLPrefix\": \"https://some-domain.com\"\n  },\n  {\n    \"path\": \"/path2/xyz/\",\n    \"endpoint\": \"https://<another-endpoint/path>/event-{{id}}.json\",\n    \"regex\": \"/[^/]+$/\",\n    \"template\": \"/templates/category/template.html\"\n  }\n]\n\nConfiguration Parameters\n\nEach configuration object in the array requires the following keys:\n\npath: The URL path pattern to match for this overlay configuration. The worker will use this to determine which config to apply.\nExample: /path1/abc/def/\nRequired: Yes\nendpoint: The API endpoint URL that contains the JSON data. Use {{id}} as a placeholder that will be replaced with the ID extracted from the request URL. {{id}} is used in conjunction with the regex option. 
You can also inject header values into the endpoint.\nExample: https://api.example.com/event-list.json\nRequired: Yes\nOptional and advanced configurations\nregex: A regular expression pattern to extract the ID from the request URL. Should be provided as a string with forward slashes.\nExample: /[^/]+$/ (matches the last segment of the URL path)\nRequired: Only if you use `{{id}}` in the endpoint. Not required if you're using `arrayKey` and `pathKey`\ntemplate: Relative URL to a Mustache template file located under the same org/site/branch that will be used to render the JSON data. If not provided, a default semantic HTML structure will be generated.\nExample: /templates/event.html\nRequired: No (but recommended to have a template)\nDefault: Basic semantic HTML with nested divs\nheaders: Custom HTTP headers to send when fetching data from the endpoint. If not provided, it defaults to { 'Content-Type': 'application/json' }.\nExample: { 'X-API-Key': 'your-api-key-here', 'Accept': 'application/json' }\nRequired: No\nDefault: { 'Content-Type': 'application/json' }\nforwardHeaders: An array of header names to forward from the incoming request to the endpoint API call. This is useful for passing authentication tokens from Helix Admin to AEM content source for example.\nExample: [\"Authorization\"]\nRequired: No\nDefault: No incoming headers are forwarded\nrelativeURLPrefix: allows you to prefix relative URLs in the generated HTML content with a domain. Applies only to src and href attributes and only for these file types that are supported by BYOM: .mp4, .pdf, .svg, .jpg, .jpeg, .png\nExample: https://some-domain.com\nRequired: No\nDefault: No prefix will be added to relative paths\nuseAEMMapping: Uses the AEM mapping available at /config.json to rewrite the links in the resulting HTML appropriately\nExample: true\nRequired: No\nDefault: false\narrayKey: Specifies the key in the JSON response that contains an array of items. 
When provided, the worker will iterate through this array to find the JSON object that matches the incoming path via pathKey. This is useful when a JSON response contains a collection of data and you only want to select a single object. It is also a good option for leveraging Edge Delivery spreadsheet JSON as the endpoint.\nExample: \"data\" or \"events\"\nRequired: No\nDefault: null (no filtering of the JSON data)\nWhen used with pathKey, allows filtering to the specific matching item from a larger data set\npathKey: Specifies the key in each array item that should be used to match with the incoming path. This works in conjunction with arrayKey to select only a specific JSON object from an array to be used for generating the dynamic HTML.\nExample: \"URL\" or \"path\"\nRequired: No (but required if using arrayKey)\nDefault: null (no filtering of the JSON data)\nWhen used with arrayKey, allows filtering to the specific matching item from a larger data set\ntemplateApiKey: A Site Token used to authenticate when fetching the Mustache template from a protected site.\nExample: \"your-site-token\", will be in the format hlx_XXX\nRequired: No (but required if your site uses site authentication)\nDefault: No authentication header is sent when fetching templates\nNote: This key is sent as Authorization: token <templateApiKey> header when fetching:\nThe Mustache template file\nThe /config.json file (when useAEMMapping is enabled)\n\nThe worker will:\n\nMatch the request URL against the path patterns\nUse the regex to extract an ID from the URL\nReplace {{id}} in the endpoint URL with the extracted ID (if applicable)\nFetch JSON data from the endpoint\nIf `arrayKey` is specified, iterate through the array and use `pathKey` to filter to the right JSON object to be used for rendering\nRender using either the specified template or default HTML structure\nUsing Header Placeholders in Endpoints\n\nYou can dynamically construct endpoint URLs using values from request 
headers. This is useful when the API endpoint path depends on information passed via headers (e.g., x-content-source-location).\n\nSupported Syntax: The endpoint URL supports two placeholder patterns for header values:\n\nDot notation: {{headers.headerName}}\nBracket notation: {{headers['header-name']}}\n\nBoth patterns support case-insensitive header matching, so {{headers.Authorization}} will match an authorization header.\n\nExample Configuration:\n\n{\n  \"path\": \"/tenant/data/\",\n  \"endpoint\": \"https://api.example.com/{{headers.x-tenant-id}}/resources/{{id}}.json\",\n  \"regex\": \"/[^/]+$/\",\n  \"forwardHeaders\": [\"x-tenant-id\"]\n}\n\n\nWith an incoming request containing X-Tenant-Id: acme-corp, the endpoint resolves to: https://api.example.com/acme-corp/resources/123.json\n\nFor header names containing hyphens or special characters, use bracket notation:\n\n{\n  \"path\": \"/api/users/\",\n  \"endpoint\": \"https://backend.example.com/org/{{headers['x-org-id']}}/users/{{headers['x-user-id']}}.json\",\n  \"forwardHeaders\": [\"x-org-id\", \"x-user-id\"]\n}\n\n\nNote:\n\nRemember to include any headers you reference in the forwardHeaders array if they come from the incoming request\nHeaders defined in the headers config object are also available for substitution\nThis feature works in combination with the {{id}} placeholder from regex matching\nExample Configuration Updates\nUsing curl:\ncurl -X POST \\\n  https://json2html.adobeaem.workers.dev/config/myorg/mysite/main \\\n  -H \"Authorization: token your-admin-token-here\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '[\n      {\n        \"path\": \"/events/\",\n        \"endpoint\": \"https://api.example.com/events/{{id}}.json\",\n        \"regex\": \"/[^/]+$/\",\n        \"template\": \"/templates/event.html\",\n        \"templateApiKey\": \"your-site-token\",\n        \"headers\": {\n          \"X-API-Key\": \"your-api-key-here\"\n        },\n    \"forwardHeaders\": [\n      
\"Authorization\"\n    ],\n    \"relativeURLPrefix\": \"https://some-domain.com\"\n      },\n      {\n        \"path\": \"/path2/xyz/\",\n        \"endpoint\": \"https://<another-endpoint/path>/event-{{id}}.json\",\n        \"regex\": \"/[^/]+$/\",\n        \"template\": \"/templates/another-mustache-template.html\"\n      },\n     {\n       \"path\": \"/dynamic-pages/\",\n       \"endpoint\": \"https://www.edge-delivery-site.com/all-the-data.json\",\n       \"arrayKey\": \"data\",\n       \"pathKey\": \"URL\",\n       \"template\": \"/templates/fun-template.html\"\n     }\n    ]'\n\nExample Regex and Endpoint Combinations\nMatch numeric ID at end of path:\n\"regex\": \"/\\\\d+$/\"\n\"endpoint\": \"https://api.example.com/items/{{id}}.json\"\n\n\nMatches: /products/123, /categories/789\n\nMatch alphanumeric slug:\n\"regex\": \"/[a-zA-Z0-9-]+$/\"\n\"endpoint\": \"https://api.example.com/articles/{{id}}\"\n\n\nMatches: /blog/my-article-123, /news/breaking-news\n\nMatch date and ID pattern:\n\"regex\": \"/\\\\d{4}/\\\\d{2}/\\\\d{2}/[\\\\w-]+$/\"\n\"endpoint\": \"https://api.example.com/archive/{{id}}\"\n\n\nMatches: /posts/2023/12/25/christmas-special\n\nMatch multiple path segments:\n\"regex\": \"/([^/]+)/([^/]+)$/\"\n\"endpoint\": \"https://api.example.com/{{id}}/details.json\"\n\n\nMatches: /category/subcategory, /region/city\n\nMatch specific prefix with ID:\n\"regex\": \"/event-([\\\\w-]+)$/\"\n\"endpoint\": \"https://api.example.com/events/{{id}}/full\"\n\n\nMatches: /calendar/event-summer-2023, /schedule/event-conf-2024\n\nMatch UUID format:\n\"regex\": \"/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/\"\n\"endpoint\": \"https://api.example.com/records/{{id}}\"\n\n\nMatches: /data/550e8400-e29b-41d4-a716-446655440000\n\nUnderstanding the relativeURLPrefix Option\n\nThe relativeURLPrefix is a configuration option that allows you to rewrite relative URLs in the generated HTML content. 
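The rewrite rule can be sketched as follows. This is assumed behavior inferred from the description of the option, not the worker's actual implementation; cdn.example.com is a placeholder.

```javascript
// Sketch: only root-relative URLs ending in a supported extension get the prefix.
const SUPPORTED = ['.mp4', '.pdf', '.svg', '.jpg', '.jpeg', '.png'];

function prefixRelativeURL(url, prefix) {
  // Root-relative means it starts with a single '/' (not protocol-relative '//').
  const isRootRelative = url.startsWith('/') && !url.startsWith('//');
  const hasSupportedType = SUPPORTED.some((ext) => url.toLowerCase().endsWith(ext));
  return isRootRelative && hasSupportedType ? prefix + url : url;
}

prefixRelativeURL('/images/photo.jpg', 'https://cdn.example.com');
// → 'https://cdn.example.com/images/photo.jpg'
prefixRelativeURL('https://other.com/a.png', 'https://cdn.example.com');
// → unchanged: already an absolute URL
```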
This is particularly useful for serving static assets from a CDN or different domain while keeping content URLs relative.\n\nWhat it does: Automatically adds a custom domain prefix to relative URLs in your HTML content.\n\nBefore: <img src=\"/images/photo.jpg\">\n\nAfter: <img src=\"https://cdn.example.com/images/photo.jpg\">\n\nHow to use it: Add “relativeURLPrefix”: “https://your-cdn.com” to your config.\n\nWhat gets changed: Only URLs that start with / and end with supported file types\n\nWhat doesn't change: URLs that are already full web addresses or don't match the file type criteria.\n\nSupported File Types: The system only rewrites URLs for these file extensions:\n\n.mp4 - Video files\n.pdf - PDF documents\n.svg - SVG graphics\n.jpg / .jpeg - JPEG images\n.png - PNG images\n\nImportant Notes\n\nOnly affects URLs starting with forward slash (root-relative URLs)\nDoes not modify URLs that are already absolute or protocol-relative\nExample Template\n\nThe example template below demonstrates how to handle complex JSON structures and conditional rendering using mustache.js syntax.\n\nVariable substitution with {{variable}}\nSection blocks with {{#section}}...{{/section}}\nConditional rendering with {{#condition}}...{{/condition}}\nArray iteration\nNested object access\nUnescaped HTML output with {{{triple-mustache}}}\nBoolean flags for conditional content\n\nThis template showcases various Mustache patterns:\n\n<!-- Basic variable substitution -->\n<title>{{title}}</title>\n\n<!-- Conditional sections -->\n{{#hasValues}}\n  <!-- Content only rendered if hasValues is true -->\n{{/hasValues}}\n\n<!-- Array iteration -->\n{{#values}}\n  <!-- Content repeated for each item in values array -->\n{{/values}}\n\n<!-- Nested object access -->\n{{values.0.valueName}}\n\n<!-- HTML content with triple mustache -->\n{{{highlights}}}\n\n<!-- Boolean flags for conditional rendering -->\n{{#hasConsensus}}\n  <!-- Content only rendered if hasConsensus is true 
-->\n{{/hasConsensus}}\n\nThe template uses standard Mustache syntax\nHTML content in JSON should be properly escaped\nBoolean flags control the visibility of optional sections\nThe template demonstrates handling of nested data structures\nAll variable substitutions use proper Mustache escaping\nOdds and ends\n\nSome meta tags are handled differently in Edge Delivery Services while previewing.\n\nFor example, if you want to add a bunch of article:tags, you can do the following in the mustache template:\n\n<meta name=\"tags\" content=\"{{#tags}}{{.}},{{/tags}}\">\n\nand they will be blown up into individual article:tag meta elements in the resulting HTML\n\nFAQ\n\nWhy use mustache and not handlebars or more complex templating solutions?\nAs with everything else in Edge Delivery Services, we strive to make complex interactions as simple and as fast as possible. We use a dependency-free version of Mustache which is a logic-less template syntax that keeps things as simple as possible and results in extremely fast processing times.\n\nWhat if I want to do some pre/post-processing of the source JSON / resulting HTML for my pages?\n\nFor source JSON - you can either update the source JSON into the format you need or handle it at a different edge worker that the json2html worker will use as an endpoint.\nFor resulting HTML - the HTML generated by this service will be previewed and published like any other page. You can control the resulting HTML on the page using block/site JS/CSS as you please like with any other page in Edge Delivery Services. You can also do whatever is in the confines of the mustache template which is pretty powerful in itself.\n\nHow do I develop and test for this without disturbing my production configuration?\nLike with everything else in Edge Delivery, this service is also branch-aware. That means you can push a config to a separate branch and use that branch for testing anything you would want to. 
Once you are satisfied with the testing, you can push it up to the main branch and update the config in the main branch as necessary.\n\nCan I use JSON2HTML with sites that have site authentication enabled?\nYes! If your Edge Delivery site uses site authentication, you'll need to provide a “templateApiKey” in your configuration. This key is used to authenticate when fetching the Mustache template and other resources (like “/config.json” when using “useAEMMapping”). Without this key, the worker won't be able to fetch templates from authenticated sites and will fall back to generating generic HTML.\n\nPrevious\n\nBYOM (Bring your own markup)\n\nUp Next\n\nPublish AEM Content Fragments with json2html","lastModified":"1768625307","labs":"AEM Sites"},{"path":"/developer/content-fragment-overlay","title":"Publishing AEM Content Fragments to Edge Delivery Services","image":"/developer/media_1b8d1abb91edbdd62a009761c83acb2c2f17596c3.png?width=1200&format=pjpg&optimize=medium","description":"AEM content fragments are used to create, manage, and deliver content across multiple channels. Until recently, publishing content fragments to Edge Delivery Services only embedded ...","content":"style\ncontent\n\nPublishing AEM Content Fragments to Edge Delivery Services\n\nAEM content fragments are used to create, manage, and deliver content across multiple channels. Until recently, publishing content fragments to Edge Delivery Services only embedded a reference to the content fragment—not the actual content—in the semantic HTML. This can limit an LLM agent’s ability to ingest and understand the content or create a meaningful search index for a page.\n\nWith the approach described next, you can publish AEM Content Fragments to Edge Delivery Services as self-contained semantic HTML.\n\nWhy Does This Matter?\n\n1. LLM/SEO Optimization\n\nLLM and SEO optimization improve because content fragments are now published as self-contained semantic HTML. 
Previously, only links to fragments were included, so automated agents like search engines and language models could not access the full content. Now, the complete content is directly available without requiring JavaScript.\n\n2. Omnichannel Delivery\n\nOmnichannel content such as banners, press releases, and blog posts can be managed and published as full web pages instead of only as headless content. Edge Delivery Services acts as the “head” for the headless CMS, providing HTML output for any channel.\n\n3. Simplified Workflow\n\nIn the traditional approach, GraphQL endpoints and queries must be defined and published together with the fragments. For each content fragment model, a separate block is created and then added to an AEM page. Finally, the page itself is published, containing only a reference to the content fragment rather than the fragment’s full content.\n\nThe new approach streamlines this significantly. Instead of working with GraphQL endpoints, you define path mapping and overlay directly in the site configuration. 
The json2html service is then configured, and a Mustache template is applied to transform the JSON output of the fragments into HTML.\n\nWith this setup, the content fragments can be published directly as HTML, eliminating the need for additional blocks, queries, or pages.\n\nHow to Set It Up\n\nAs prerequisites you need:\n\nAn AEM as a Cloud Service (AEMaaCS) environment.\nThis feature turned on in your AEMaaCS environment; to enable it, please reach out to your Adobe team on Slack/Teams.\nA site using the aem-boilerplate-xwalk project template or similar.\nConfiguration Service enabled in Edge Delivery Services.\nContent Fragments enabled for your site.\n\nNote: While the steps below refer to a site that was set up with AEM authoring, the approach works for any other primary content source as well.\n\nFor our example we are going to use a content fragment model for press releases with fields such as title, author, text, and image.\n\nNext we create Content Fragments in a dedicated assets folder (e.g., below press) using the press release model.\n\nStep 1: Configure Path Mapping and Overlay in Configuration Service\n\nIn Configuration Service, define a path mapping for your content fragments and allowlist the content fragment model to be published to Edge Delivery Services. In our example we publish Press Release fragments in /content/dam/xwalk-omnichannel/press/ to /press/.\n\ncurl --request POST \\\n  --url https://admin.hlx.page/config/{org}/sites/{site}/public.json \\\n  --header 'Content-Type: application/json' \\\n  --header 'x-auth-token: ......' 
\\\n  --data '{\n    \"paths\": {\n      \"mappings\": [\n        \"/content/xwalk-omnichannel/:/\",\n        \"/content/dam/xwalk-omnichannel/press/:/press/\"\n      ],\n      \"includes\": [\n        \"/content/xwalk-omnichannel/\",\n        \"/content/dam/xwalk-omnichannel/\"\n      ]\n    },\n    \"xwalk\": {\n      \"content-fragment-overlay\": {\n        \"/content/dam/xwalk-omnichannel/press/**\": {\n          \"includes\": [\n           \"/conf/xwalk-omnichannel/settings/dam/cfm/models/press-release\"\n          ]\n        }\n      }\n    }\n  }'\n\n\nNext, add an overlay to the content source in your site’s configuration, pointing it to the json2html service.\n\ncurl --request POST \\\n  --url https://admin.hlx.page/config/{org}/sites/{site}/content.json \\\n  --header 'Content-Type: application/json' \\\n  --header 'x-auth-token: ......' \\\n  --data '{\n    \"source\": {\n      \"url\": \"https://author-pXXXX-eXXXX.adobeaemcloud.com/bin/franklin.delivery/adobe-rnd/xwalk-omnichannel/main\",\n      \"type\": \"markup\",\n      \"suffix\": \".html\"\n    },\n    \"overlay\": {\n      \"url\": \"https://json2html.adobeaem.workers.dev/adobe-rnd/xwalk-omnichannel/main\",\n      \"type\": \"markup\"\n    }\n  }'\n\n\nThe URL format for the json2html service is: https://json2html.adobeaem.workers.dev/ORG/SITE/BRANCH\n\nHow the overlay works:\n\nWhen a content fragment is published, the Admin API checks the overlay first.\nThe json2html service fetches the content fragment from AEM as JSON.\nIt transforms the JSON to HTML using a Mustache template.\nThe HTML is ingested into Edge Delivery as a page by the Admin API.\nStep 2: Configure the json2html Service\n\nThe following is the curl command to set up the service for our example use case:\n\ncurl --request POST \\\n --url https://json2html.adobeaem.workers.dev/config/adobe-rnd/xwalk-omnichannel/main \\\n --header 'Authorization: token <admin-api-token>' \\\n --header 'Content-Type: application/json' \\\n --data 
'[\n    {\n        \"path\": \"/press/\",\n        \"endpoint\": \"https://author-pXXXX-eXXXX.adobeaemcloud.com/api/assets/xwalk-omnichannel/press/{{id}}.json\",\n        \"regex\": \"/[^/]+$/\",\n        \"template\": \"/cf-templates/press.html\",\n        \"relativeURLPrefix\": \"https://publish-pXXXX-eXXX.adobeaemcloud.com\",\n        \"headers\": {\n          \"Accept\": \"application/json\"\n        },\n        \"forwardHeaders\": [\n            \"Authorization\"\n        ]\n    }\n  ]'\n\n\nSee the documentation for the json2html Service for a detailed description of the different configuration options. In short:\n\npath: Defines the URL paths for which the service should process requests. In our example, this is the root path in Edge Delivery Services where we publish the press releases: /press/.\nendpoint: The URL of the JSON endpoint that returns the content fragment data from your AEM author instance.\nregex: A regular expression to extract a specific part of the published URL (e.g., the Content Fragment ID). The extracted value is used as the {{id}} parameter in the endpoint.\ntemplate: The path to your Mustache.js template file, which transforms the JSON data into semantic HTML. Store the template in your Edge Delivery Services GitHub project.\nrelativeURLPrefix: Converts relative URLs (for images, videos, or other assets) into absolute URLs, ensuring the admin API can ingest and render all linked assets. Set this to the base URL of your AEM publish instance.\nheaders: HTTP headers to include in the request to the JSON endpoint.\nforwardHeaders: Specifies which HTTP headers should be forwarded from the admin API to the JSON endpoint, required for authenticated requests.\nStep 3: Create a Mustache Template\n\nWithout a template, each property in your JSON is rendered as a <div>. 
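Conceptually, the render step substitutes values from the fragment JSON into the template. The following is a minimal, hypothetical stand-in for that substitution; the actual service uses Mustache.js, and the fragment data shape shown here is purely illustrative:

```javascript
// Minimal stand-in for the Mustache render step (illustrative only;
// the real service uses Mustache.js). {{x}} and {{{x}}} are treated
// the same here; real Mustache additionally HTML-escapes {{x}}.
function render(template, data) {
  return template.replace(/\{\{\{?([\w.]+)\}?\}\}/g, (_, path) =>
    path.split('.').reduce((obj, key) => obj?.[key], data) ?? '');
}

// Hypothetical fragment JSON, shaped like the properties referenced
// in the press-release template.
const fragment = {
  properties: { elements: { newsTitle: { value: 'Hello Press' } } },
};

console.log(render('<h1>{{properties.elements.newsTitle.value}}</h1>', fragment));
// → <h1>Hello Press</h1>
```

In a real template, triple braces ({{{...}}}) are used for values such as URLs and rich text so that Mustache does not HTML-escape them.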
To create more meaningful semantic HTML for our press release example, we define a template that renders:\n\nthe first image and the press release title as a hero block\nthe author name as default content\nthe news article and two images in a columns block\n<!DOCTYPE html>\n<html>\n  <head>\n    <title>{{properties.title}}</title>\n  </head>\n  <body>\n    <header></header>\n    <main>\n      <div>\n        <div class=\"hero\">\n          <div>\n            <div>\n              <p>\n                <picture>\n                  <img src=\"{{{properties.elements.image.value}}}\">\n                </picture>\n              </p>\n              <h1>{{properties.elements.newsTitle.value}}</h1>\n            </div>\n          </div>\n        </div>\n        <p><strong>Author:</strong> {{properties.elements.author.value}}</p>\n        <div class=\"columns\">\n          <div>\n            <div>\n              <p>\n                <picture>\n                  <img src=\"{{{properties.elements.image2.value}}}\">\n                </picture>\n              </p>\n            </div>\n            <div>\n              <p>{{{properties.elements.text.value}}}</p>\n            </div>\n            <div>\n              <p>\n                <picture>\n                  <img src=\"{{{properties.elements.image3.value}}}\">\n                </picture>\n              </p>\n            </div>\n          </div>\n        </div>\n      </div>\n    </main>\n    <footer></footer>\n  </body>\n</html>\n\n\nKey Points:\n\nUse Mustache.js syntax.\nEnsure all asset paths are absolute, or use relativeURLPrefix to append the right domain to the relative URLs in the content.\nFollow Edge Delivery Services semantic HTML guidelines to create valid semantic HTML that can be ingested by the admin API.\nTest Your Setup\nCreate and publish a content fragment to Edge Delivery Services.\nVerify the output is a standalone HTML page with all content rendered.\n\nFor our example the output looks as 
follows:\n\nConclusion\n\nBy setting up an overlay using the json2html service, you can now publish AEM content fragments as self-contained semantic HTML to Edge Delivery Services. This simplifies omnichannel delivery and makes your content LLM-ready—all with minimal setup.\n\nPrevious\n\njson2html Overview","lastModified":"1773390516","labs":"Sites"},{"path":"/developer/setup-google-drive","title":"Setup Google Drive as a Content Source","image":"/developer/media_1d00989ba18e942fbddc9bb108add01e153029f22.png?width=1200&format=pjpg&optimize=medium","description":"This is a follow-up step for sites that would like to use Google Drive (docs and spreadsheets) as a content source. It assumes that the ...","content":"style\ncontent\n\nSetup Google Drive as a Content Source\n\nThis is a follow-up step for sites that would like to use Google Drive (docs and spreadsheets) as a content source. It assumes that the Developer Tutorial has already been completed.\n\nPrerequisites:\n\nCompleted the Developer Tutorial\nYou have a Google account.\nLink your own content source using Google Drive\n\nIn your fork of the aem-boilerplate GitHub repository, the site by default points to Document Authoring. To replicate the default content in Google Drive, we recommend using this folder for some example content.\n\n\nThis content is read-only, but it can be copied into your Google Drive folder to serve as a starting point.\n\nTo author your own content, create a folder in your own Google Drive and share the folder with the Adobe Experience Manager user (helix@adobe.com).\n\nA good way to start authoring your own content is to copy index, nav, and footer from the sample content and familiarize yourself with the content structure. nav and footer are not changed frequently in a project and have a special structure. Most of the files in a project look more similar to index.\n\nOpen the files and copy/paste the entire content into corresponding files in your own Google Drive. 
You can also download the files via Download All or download individual files. However, remember to convert the downloaded .docx files back into native Google Docs when you upload them to your folder in your Google Drive.\n\nNow that you have your content, you need to connect that content to your GitHub repo. You do this by changing the reference in Site Configuration in the configuration service.\n\n(see: https://www.aem.live/docs/admin.html#schema/SiteConfig for more detail)\n\nAn easy way to make this change is to use https://labs.aem.live/tools/site-admin/index.html to either create or update a site configuration.\n\nBe aware that after you make that change, you will see 404 not found errors as your content has not been previewed yet. Please refer to the next section to see how to start authoring and previewing your content. If you copied over index, nav, and footer, all three are separate documents with their own preview and publish cycles, so make sure you preview (and publish) all of them if needed.\n\nPreview and publish your content\nhttps://main--helix-website--adobe.aem.page/developer/videos/tutorial-step3.mp4\n\nAfter completing the last step, your new content source is not empty, but no content has been promoted to the preview or live stages, which means your website serves 404s.\n\nTo preview content, an author has to install the Sidekick Chrome extension. Find the Chrome extension in the Chrome Web Store.\n\nAfter adding the extension to Chrome, don’t forget to pin it; this will make it easier to find.\n\nTo set up the Chrome extension, go to your previously shared Google Drive folder, click the extension icon in the browser toolbar, and select Add this project.\n\nAs soon as the extension is installed and your project is added, you are ready to preview and publish your content from your Google Drive.\n\nSelect all three docs and activate the AEM Sidekick by clicking on your pinned extension. A new toolbar will appear. 
Clicking the preview or publish buttons will trigger the corresponding operation.\n\n\nOpen the index doc and make some changes. Activate the Sidekick by clicking on your pinned extension and then click the Preview button, which will trigger the preview operation and open a new tab with the preview rendition of the content.\n\nPrevious\n\nBuild\n\nUp Next\n\nAnatomy of an AEM Project","lastModified":"1761159313","labs":""},{"path":"/docs/metadata","title":"Page Metadata","image":"/docs/media_1cf7bb3a1af050eff35416bc16502895c1f5a166e.jpg?width=1200&format=pjpg&optimize=medium","description":"How to author, preview and publish page metadata in AEM.","content":"style\ncontent\n\nPage Metadata\n\nMetadata is information about your page that is invisible to visitors, but important for AI agents (GEO), search engines (SEO), and social media sites that want to embed your content.\n\nTo add metadata to your page in document-based authoring, create a table like the following at the end of the document. For more information on how to create a table, see the product documentation:\n\nDocument Authoring\nMicrosoft Word\nGoogle Docs\n\nStructure your table like this:\n\nThe first row of the table should just contain the word “Metadata”. This tells AEM that what follows is metadata for your document.\n\nThen create one row for each metadata property. The left column contains the name of the metadata property, the right column contains the value. In most cases, values are plain text, but as you can see from the “image” row above, sometimes other content can be used, too.\n\nYou’ve just created a metadata block.\n\nTo preview and publish your changes, follow the instructions in the Authoring document.\n\nBulk Metadata\n\nYou can also manage metadata for many pages at once. 
See the document Bulk Metadata for more information.\n\nhttps://main--helix-website--adobe.aem.page/docs/special-metadata-properties\nOmitting Metadata Values\n\nIf you want to remove a metadata value, you can simply add its property name in the left column and leave its value empty in the right column.\n\nPrevious\n\nAuthoring\n\nUp Next\n\nBulk Metadata","lastModified":"1756715653","labs":""},{"path":"/developer/operational-telemetry","title":"Developing Operational Telemetry in AEM","image":"/developer/media_1591c6c4c19f9332f0842209ec733a7cd50e9b69d.png?width=1200&format=pjpg&optimize=medium","description":"Adobe Experience Manager uses Operational Telemetry to diagnose usage and performance of websites running on Adobe Experience Manager. As a developer, you can use ...","content":"style\ncontent\n\nDeveloping Operational Telemetry in AEM\n\nAdobe Experience Manager uses Operational Telemetry to diagnose usage and performance of websites running on Adobe Experience Manager. As a developer, you can use the Operational Telemetry APIs to observe additional events about how your site is used.\n\nThis document describes the key concepts of strictly necessary Operational Telemetry, details the client-side APIs that you can use to send additional data to the Operational Telemetry collection service, and notes that the collected data can be queried.\n\nHow to add Operational Telemetry Instrumentation to your Site\n\nIf your website is not built using AEM Edge Delivery Services, it is recommended to set up Operational Telemetry in standalone mode. 
To set it up, simply add the following script to your pages.\n\n<script defer type=\"text/javascript\" src=\"https://ot.aem.live/.rum/@adobe/helix-rum-js@^2/dist/rum-standalone.js\"></script>\n\nFor better performance, it is recommended to load the script after the Largest Contentful Paint (LCP) event.\n\nKey Concepts\n\nEvery data point collected by Operational Telemetry is made up of the following key parts:\n\nid – a unique identifier, generated by the Operational Telemetry library, that identifies the current page view\ncheckpoint – a named event in the sequence of loading and interacting with the page\nsource – identifies the DOM element that caused a particular interaction (optional)\ntarget – identifies an external resource or link that is the subject of an interaction (optional)\nCheckpoints\n\nBy convention, checkpoint names are lowercase letters, without any special characters. The most common checkpoints used in Adobe Experience Manager projects are:\n\ntop – the page loading sequence has begun and the first JavaScript code is being executed by the page. This event fires even before blocks are decorated or content is visible\nloadresource – tracks what fragments and JSON API endpoints are loaded for the site and how much time they take to load.\ncwv – indicates either that the page is ready to collect Core Web Vitals (CWV) readings or that an LCP, CLS, or FID reading has been recorded. As these readings are asynchronous, multiple of these checkpoints can be passed during one page view\nlcp – the Largest Contentful Paint (LCP) has occurred in the browser; this is usually the most prominent image on the page\nviewblock – a block has scrolled into view and there is a chance that the content of that block is seen. The class name of the block will be shared in the source property.\nviewmedia – an image or video has scrolled into view and there is a chance that it is seen. 
The URL of the image or video is shared in the target property.\nnavigate – helps discover internal navigation paths.\nenter – helps discover external referrers. The value direct subsumes visitors entering the URL directly into their address bar, following browser bookmarks, or opening the page from iOS applications.\nlanguage – the content languages that are used and which language users select as their preferred language.\nally – tracks what accessibility features are detected on the site.\nconsent – the consent provider enabled on the site and how the user interacts with it.\nacquisition – all the inorganic traffic sources for the site.\nredirect – the number of hops it took to reach the destination URL the user was looking for.\nclick – a click event has been triggered on any element, not just on links or buttons. If the element clicked is a link, then the link target is recorded in the target property. The source property contains information about which element of the DOM was clicked.\nerror – a JavaScript error has occurred on the page and has not been handled by the application code. This usually indicates a bug.\n404 – a 404 (page not found) page has been served. This checkpoint can indicate missing content or broken links.\nsearch – a site search on the page is performed, typically using a search input field.\nfill – indicates that a form field was filled by the user. The source property contains the CSS selector of the field that was filled. The data that the user entered is not captured.\nformsubmit – a form is submitted on the page. 
The form action is recorded in the target property, while the source property contains information about which form on the page was submitted.\n\nPrevious\n\nOperational Telemetry Explorer","lastModified":"1763487482","labs":""},{"path":"/developer/ai-coding-agents","title":"Developing with AI Tools","image":"/developer/media_187e02a1c2db3b2b6641c969122a4e588d41b5657.png?width=1200&format=pjpg&optimize=medium","description":"AI coding agents such as Claude Code, Cursor, Codex, Gemini, GitHub Copilot, or Zed and the models they employ generally have good working knowledge of ...","content":"style\ncontent\n\nDeveloping with AI Tools\n\nAI coding agents such as Claude Code, Cursor, Codex, Gemini, GitHub Copilot, or Zed and the models they employ generally have good working knowledge of the core technologies that power AEM. As we use semantic HTML, Vanilla JavaScript, and framework-less CSS, we benefit from the largest possible training set.\n\nHowever, agents start every session from scratch. They don't know your project's conventions, block patterns, or workflows. Without guidance, they'll make reasonable guesses that miss the mark, such as wrong DOM structures, skipped verification steps, and content models that don't follow established patterns.\n\nThis guide covers ways you can enhance that experience to get the most out of these tools and make agentic development with AEM as productive as possible.\n\nSetting Up Your Project\nEstablishing Context\n\nThe AEM Boilerplate ships with an AGENTS.md that covers project structure, setup commands, coding conventions, and block development patterns. If your project is based on a recent version of the boilerplate, you already have one. If not, use it as a starting point. In either case, don’t be shy about adjusting it to fit your specific project’s conventions and patterns. 
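As a sketch, a project-specific section added to AGENTS.md might look like the following; the conventions listed here are purely illustrative, not part of the boilerplate:

```markdown
## Project Conventions (example)

- Blocks live in `blocks/<name>/` with `<name>.js` and `<name>.css`.
- Run `npm run lint` before committing; fix all reported issues.
- Preview changes locally with `aem up` at http://localhost:3000.
- Create test content for a block before implementing it.
```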
Note that Claude Code uses CLAUDE.md for the same purpose, so you may want both for maximum compatibility.\n\nWe publish an llms.txt file which provides an overview of AEM and links to key documentation in a format optimized for AI consumption. Most agents don't refer to it proactively, but mentioning this file in a rules file or your prompt allows the agent to quickly include key documentation and APIs in its context to produce better results.\n\nHelping the Agents Search\n\nModern coding agents have access to web search, but they will often mix up the Java/JCR/OSGi-based technology stack for AEM Sites as a Cloud Service with the lightweight HTML/CSS/JavaScript stack of Edge Delivery Services in Adobe Experience Manager Sites as a Cloud Service when you ask about AEM. When asking about Edge Delivery Services, you often get information about Cloudflare's, Akamai's or Fastly's Edge Compute and CDN infrastructure, and searching for EDS will bring up medical information.\n\nThe best way to prevent that is to specify \"search the www.aem.live website\". Many tools also allow you to index custom documentation, in which case you should add https://www.aem.live/docs/.\n\nWithout Context\n> Help me find information about EDS.\n\n⏺ I'll help you find information about EDS. 
Let me search for relevant\n  information.\n\n⏺ Web Search(\"EDS Ehlers-Danlos Syndrome medical condition information\")\n\n╭─────────────────────────────────────────────────────────────────────────────────╮\n│ Tool use                                                                        │\n│                                                                                 │\n│   Web Search(\"EDS Ehlers-Danlos Syndrome medical condition information\")        │\n│   Claude wants to search the web for: EDS Ehlers-Danlos Syndrome medical        │\n│   condition information                                                         │\n│                                                                                 │\n│ Do you want to proceed?                                                         │\n│ ❯ 1. Yes                                                                        │\n│  2. Yes, and don't ask again for Web Search commands in                         │\n│  /private/tmp/new-project                                                       │\n│   3. No, and tell Claude what to do differently (esc)                           │\n│                                                                                 │\n╰─────────────────────────────────────────────────────────────────────────────────╯\n\nWith Context\n> Help me find information about Edge Delivery. 
Search www.aem.live\n\n⏺ I'll help you search for information about Edge Delivery on\n  http://www.aem.live.\n\n⏺ Fetch(https://www.aem.live)\n  ⎿  Fetching…\n\n╭─────────────────────────────────────────────────────────────────────────────────╮\n│ Fetch                                                                           │\n│                                                                                 │\n│   https://www.aem.live                                                          │\n│   Claude wants to fetch content from www.aem.live                               │\n│                                                                                 │\n│ Do you want to allow Claude to fetch this content?                              │\n│ ❯ 1. Yes                                                                        │\n│   2. Yes, and don't ask again for www.aem.live                                  │\n│   3. No, and tell Claude what to do differently (esc)                           │\n│                                                                                 │\n╰─────────────────────────────────────────────────────────────────────────────────╯\n\nGiving Agents Skills\n\nSkills are specialized, reusable workflows that help your AI coding agent perform complex, multi-step tasks more effectively. Think of them as mini-playbooks that bundle together instructions, reference materials, and scripts to handle specific scenarios.\n\nSkills use progressive disclosure to load detailed instructions only when needed, preserving your agent's context window for what matters.\n\nSkills follow an open standard supported by a large and growing number of agents.\n\nAEM Edge Delivery Skills\n\nWe maintain a set of skills for AEM Edge Delivery development that help agents tackle development, testing, and migration tasks according to AEM best practices. 
The skills are organized around two orchestration skills that coordinate complete workflows, supported by specialized sub-skills and standalone research skills.\n\nContent Driven Development\n\nThe content-driven-development skill orchestrates the complete development workflow for building or modifying blocks. It codifies AEM's content-first philosophy by requiring test content before code. Use it for all code changes — new blocks, block modifications, CSS styling, bug fixes, or any JavaScript/CSS work that needs validation. It coordinates sub-skills for analysis and planning, content modeling, block implementation, testing, and code review.\n\nPage Import\n\nThe page-import skill orchestrates importing or migrating pages from existing websites into AEM Edge Delivery content. It coordinates sub-skills for scraping, structure analysis, authoring decisions, HTML generation, and local preview.\n\nResearch Skills\n\nStandalone skills that help agents find information, references, or understand what's available.\n\ndocs-search – Search aem.live documentation and blogs for platform features, implementation guidance, and best practices\nblock-collection-and-party – Find reference implementations from the Block Collection and Block Party repos\nblock-inventory – Survey available blocks from the local project and Block Collection to understand what's already built\nfind-test-content – Search for existing pages containing a specific block to identify test content\nAdding Skills to Your Project\n\nThe easiest way to add skills is to use gh-upskill:\n\n# Install gh-upskill as a GitHub CLI extension\ngh extension install ai-ecoverse/gh-upskill\n\n# Add AEM skills to your project (run from project root)\ngh upskill adobe/skills --path skills/aem/edge-delivery-services --all\n\n\nThis adds the skills we use to your project's .skills/ directory. 
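Assuming all skills are added, the resulting layout might look roughly like this (a sketch; the exact structure depends on the gh-upskill version and the skills you select):

```
.skills/
├── content-driven-development/
├── page-import/
├── docs-search/
├── block-collection-and-party/
├── block-inventory/
└── find-test-content/
```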
Most agents that support the Agent Skills standard will discover and use them automatically, but some agents require skills in a specialized directory; consult your agent's documentation for details.\n\nUsing Skills\n\nOnce added, skills are automatically usable. Just prompt as you normally would and your agent should apply the skills where appropriate. General prompt engineering guidance still applies, so you should, for example, mention a specific skill when you know you want the agent to use it.\n\nPro-tip: If you notice your agent going off-course or not applying the skills correctly, you can ask the agent to update the skills to avoid repeating the same mistakes.\n\nSample prompts\nBuild an embed block for Instagram URLs\nUpdate the CSS of the hero block to match this style (add screenshot)\nHelping the Agents See\n\nIn many cases, your coding agent will make assumptions about what your site looks like without having the ability to actually verify this. You can help the agent in multiple ways:\n\nBy taking screenshots of the page rendered on localhost:3000 and adding them to the chat\nBy installing browser automation tools like playwright-cli or agent-browser that give agents direct control of a browser to navigate, interact with, and screenshot your site\nBy installing specialized Model Context Protocol servers (see below)\nRequest Reviews from AI\n\nThe AEM development flow is centered around GitHub and Pull Requests. 
This makes it an excellent match for AI code reviews that work for both human- and AI-generated pull requests (don't worry, AI code reviewers don't give other AIs a free LGTM-pass).\n\nGitHub Copilot Code Reviews do not require any setup\nClaude Code Pull Request Reviews are based on GitHub Actions\nOpenAI Codex Code Review offers pull request reviews through a dedicated GitHub app\nUseful Tools for AI Agents\n\nThe best AI agents have received massive reinforcement learning on using a set of command-line tools that can speed up their development workflow, but which you often don't have pre-installed on your system. Consider installing them so that your AI agent has more capable tools at its disposal.\n\nripgrep (rg) - Fast text search across the codebase\njq - JSON processor and transformer\ngh - GitHub CLI\ncurl - Command-line HTTP requests\nast-grep (sg) - Syntax-aware code search and transformation\nhttpie - Human-friendly HTTP client\nfzf - Fuzzy finder for interactive file/command selection\nfd - Fast and user-friendly alternative to find\nbat - Syntax-highlighted cat replacement with Git integration\n\nAsk your AI agent whether these tools are installed on your machine and tell it to install any that are missing.\n\nModel Context Protocol\n\nThe Model Context Protocol (MCP) provides extensibility for AI agents and almost all AI agents support it. You can try these MCP tools that work well with AEM. 
Remember that MCP tools will fill up the available context window, so you should disable tools that you do not use.\n\n> /context\n  ⎿\n     ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛀ ⛀ ⛀\n     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   Context Usage\n     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   claude-sonnet-4-20250514 • 17k/200k tokens (8%)\n     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶\n     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ System prompt: 3.0k tokens (1.5%)\n     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ System tools: 11.4k tokens (5.7%)\n     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ MCP tools: 1.3k tokens (0.7%)\n     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ Memory files: 705 tokens (0.4%)\n     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ Messages: 91 tokens (0.0%)\n     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛶ Free space: 183.4k (91.7%)\n\n     MCP tools · /mcp\n     └ mcp__context7__resolve-library-id (context7): 691 tokens\n     └ mcp__context7__get-library-docs (context7): 652 tokens\n\n     Memory files · /memory\n     └ User (/Users/trieloff/.claude/CLAUDE.md): 705 tokens\n\nContext7\n\nThe Context7 MCP Server gives your agents access to hundreds of sites of indexed API documentation including for Adobe Experience Manager.\n\nHelix MCP Server\n\nThe (unofficial) Helix MCP Server provides tools and prompts to make your agentic development with AEM easier including docs search, block starters, and administrative tools. Please see the Readme for information on tools and usage.\n\nDA MCP Server\n\nThe (unofficial) Document Authoring MCP Server provides tools and prompts to make your agentic content creation and management development with DA easier. Please see the docs for information on tools and usage.\n\nBrowser MCP\n\nBrowser MCP is a Chrome extension, allowing your AI coding agent to remote-control your web browser and take screenshots.\n\nAEM Experience Modernization Agent\n\nIf you want to migrate your site the fastest way Adobe offers, use the AEM Modernization Agent. 
The Experience Modernization agent combines site creation and migration skills for initial website onboarding with block development capabilities for continuous experience development (style updates, template refinements, landing page creation). In addition, it offers the Experience Modernization Console as a hosted AI-assisted development environment available to you directly.\n\nQuestions or Want to Share?\n\nAI tools for AEM development are evolving fast. Join our community to get help with specific challenges and questions, share effective prompts, tools, and workflows, and connect with other developers using AI and AEM. For a detailed walkthrough of coding an AEM block with Windsurf, read Frank Townsend's blog post over at Arbory Digital.\n\nFind us on Discord, Slack, or Teams.","lastModified":"1773681491","labs":"Early-Access Technology: AI coding agents are a rapidly evolving technology, so you don't need to ask us before using them, but we are still interested in hearing about your experiences"},{"path":"/developer/ue-trial","title":"Accelerate your tutorial with a pre-built environment","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Experience AEM’s Universal Editor in a fully configured environment designed to help you start building quickly and efficiently. Explore powerful AEM features, edit content in real time, and see how you can streamline your development workflow from the very first click.","content":"style\ncontent\nAccelerate your tutorial with a pre-built environment\n\nTo streamline your experience with the AEM Sites tutorial, you can request a fully configured, ready-to-use environment here. 
This eliminates the need for manual setup, allowing you to focus directly on implementation and exploration.\n\nSign up to get an instant tutorial environment in seconds!\n\nLearn in the tutorial how you can:\n\nBuild and deploy a website with Edge Delivery Services\nUse AEM to manage your website's content\nEdit content using Universal Editor with its intuitive in-context WYSIWYG mode\n\nThe tutorial environment will be deleted after 30 days\n\nBy clicking on \"Continue\", I agree that:\n\nThe Adobe family of companies may keep me informed with personalized calls about products and services.\n\nSee our Privacy Policy for more details or to opt out at any time.\n\nI have read and accepted the Terms of Use.\n\nFrescopa\n\nA fictitious demo brand that sells coffee, related appliances & accessories online & in their physical locations\n\nboilerplate-frescopa\nCommerce Boilerplate\n\nA boilerplate for Edge Delivery Services with Universal Editor that integrates with Adobe Commerce\n\nboilerplate-xcom\nBoilerplate\n\nA boilerplate for Edge Delivery Services with Universal Editor as a starting point for new projects\n\nboilerplate-xwalk","lastModified":"1760529674","labs":""},{"path":"/docs/recurring","title":"Schedule recurring tasks","image":"/docs/media_1c7b0ed48ddf9d19abb4a4d6e5f5494a878ce54b7.png?width=1200&format=pjpg&optimize=medium","description":"If you want to schedule recurring tasks, one option is to use GitHub Actions. In the following example, we will create an action that publishes ...","content":"Schedule recurring tasks\n\nIf you want to schedule recurring tasks, one option is to use GitHub Actions. 
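Under the hood, publishing an index is a single authenticated HTTP call to the Admin API; a scheduler only needs to issue that call on a timer. A minimal sketch of composing the request in JavaScript (`adminIndexRequest` is a hypothetical helper for illustration, and `org`/`site` are placeholders for your own GitHub owner and repository name):

```javascript
// Sketch: build the Admin API request that a scheduled task would issue.
// "adminIndexRequest" is a hypothetical helper, not part of any AEM library;
// org, site, and the API key value are placeholders.
function adminIndexRequest(org, site, branch, indexPath, apiKey) {
  return {
    url: `https://admin.hlx.page/index/${org}/${site}/${branch}${indexPath}`,
    method: 'POST',
    headers: { Authorization: `token ${apiKey}` },
  };
}

const req = adminIndexRequest('org', 'site', 'main', '/query-index.json', 'secret');
console.log(req.url); // https://admin.hlx.page/index/org/site/main/query-index.json
```

Passing such a request to fetch would trigger the same publish that a scheduled GitHub Action performs.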
In the following example, we will create an action that publishes an index every hour.\n\nIn your GitHub repo, create a new action and enter the following definition:\n\nname: Publish Index\non:\n  workflow_dispatch:\n  schedule:\n    - cron: \"55 * * * *\"\n\njobs:\n  publish-index:\n    runs-on: ubuntu-latest\n    steps:\n    - name: Invoke admin\n      uses: fjogeleit/http-request-action@v1\n      with:\n        url: 'https://admin.hlx.page/index/org/site/main/query-index.json'\n        method: 'POST'\n        timeout: 30000\n        customHeaders: '{\"Authorization\":\"token ${{ secrets.ADMIN_API_KEY }}\"}'\n\n\nEvery hour, five minutes before the top of the hour, the action is invoked and publishes the index for the project org/site. We add an authorization header containing an admin API key that we create ourselves and store in the repository’s action secrets as ADMIN_API_KEY.\n\nstyle\ncontent","lastModified":"1758809250","labs":"AEM Sites"},{"path":"/docs/media","title":"Media Assets","image":"/default-social.png?width=1200&format=pjpg&optimize=medium","description":"Take a closer look at how media assets like images, videos and uploaded files are managed in AEM.","content":"style\ncontent\nMedia Assets\n\n\nIn every website, media assets like images or videos play an important role in conveying information and providing an engaging experience to visitors.\n\nDedicated Delivery Infrastructure (Media Bus)\n\n\nAs image and video delivery is crucial to the performance of a website, AEM has a built-in facility named the Media Bus that makes sure that media delivery is fast (same-origin), flexible (dynamic image rendering), reliable (built on a dual-stack architecture), immutable (cache-optimized), and simple to set up as part of the origin (no special CDN rules, single AEM origin).\n\nAs long as they adhere to the documented limits, the Media Bus facility is used for images that are copy/pasted into a document, or images and videos uploaded as separate files. 
To avoid duplication of media binaries, the Media Bus uses content-addressable storage internally, meaning that every asset is stored only once under a unique hash, visible in the URL following the media_ prefix. This system also allows AEM to cache assets permanently and makes it impossible to guess or discover a media filename via brute force.\n\nImages\n\n\nImages that are copy/pasted into a document are deduplicated and added to the Media Bus when the document is previewed. Images that are uploaded as separate files are also added to the Media Bus, and a 301 redirect will be added for the original filename of the asset. Original image dimensions are provided in the <img> tag via height and width attributes so that the browser knows the correct aspect ratio. Lazy loading is enabled by default for performance reasons.\n\nDynamic Image Manipulation\n\nThe query parameters height, width, format, and quality are supported. By default, an image is rendered as a <picture> tag with <source> and <img> children, providing a 750px version for mobile and a 2000px version for desktop, in webp with the original png or jpeg renditions as fallback.\n\nFilenames\n\nIt may be useful to give files names that differ from the hash (for instance, a simplified version of the alt text) to provide a better \"Save as…\" experience in the browser. Some sources also claim there is SEO value in providing named image resources.\n\nTo create your custom image source, you can simply modify the URL to insert the filename between the media_<hash> segment and the file extension: ./media_<hash>/<filename>.<extension>\nFor example, https://www.aem.live/media_1645e7a92e9f8448d45e8b999afa71315cc52690b/hero-collage.png?width=2000&format=webply&optimize=medium will use hero-collage.png as the filename in the browser's save dialog.\n\nText alternatives (alt)\n\nText alternatives should only be supplied if an image is informative or functional. 
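As an aside, the media URL scheme described above can also be composed programmatically. A minimal sketch (`buildMediaUrl` is a hypothetical helper for illustration, not an AEM API; the hash is the one from the example URL above):

```javascript
// Sketch: compose a Media Bus URL with a readable filename and the
// documented query parameters. "buildMediaUrl" is a hypothetical helper
// for illustration, not an AEM API.
function buildMediaUrl(origin, hash, filename, ext, params = {}) {
  const query = new URLSearchParams(params).toString();
  return `${origin}/media_${hash}/${filename}.${ext}${query ? `?${query}` : ''}`;
}

const url = buildMediaUrl(
  'https://www.aem.live',
  '1645e7a92e9f8448d45e8b999afa71315cc52690b',
  'hero-collage',
  'png',
  { width: 2000, format: 'webply', optimize: 'medium' },
);
// url reproduces the example above, with hero-collage.png as the readable filename
```

Because the hash alone identifies the asset, any filename inserted this way resolves to the same binary.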
For decorative images, a null (empty) text alternative should be provided (alt=\"\") so that they can be ignored by assistive technologies such as screen readers. Text alternatives for decorative images would add audible clutter to screen reader output or could distract users from the adjacent text.\n\n\nEqually important, the context of an informative image matters: the same image used in multiple contexts likely needs different text alternatives to be useful, so it is more sensible to store the value with the reference than with the asset.\n\nCentralized Asset Management and Delivery\n\nIn certain situations it may be desirable to have a dedicated, centralized asset management infrastructure, for example a Digital Asset Management system (DAM) like AEM Assets, to enforce curation and reuse policies and processes, as well as to provide advanced image manipulation.\n\nThere are generally two ways to integrate a centralized DAM, which differ in a couple of characteristics.\n\nApproach A: Built-in Media Bus Delivery\n\nWhen assets are managed centrally, a browsing interface (e.g. an asset picker) is provided that allows the author of a page to copy a reference (a URL to the DAM) or the actual image (pixels) into their document as an image. Whether the image is referenced by URL or copied when it is inserted into the document largely depends on the setup of the content source.\nWord and Google Docs only support images that are contained in the document itself and therefore will create a copy of the image that's selected from the DAM.\nDocument Authoring offers both: keeping a URL reference as well as making a copy of the image. 
Most Bring your own Markup setups work by reference.\n\nEither upon paste or upon preview, the image is requested from the DAM, added to the Media Bus, and delivered natively through the same origin, preserving the built-in performance, reliability, and simplicity of the CDN setup. Dynamic or advanced image manipulation (e.g. smart crop, auto-tagging, etc.) is limited to the point of ingestion (meaning at paste or preview time).\n\nThis approach is applicable in situations where simplicity and performance are key, and advanced image manipulations can be applied as part of the authoring flow. This approach also assumes a connected lifecycle of content (page or fragment) with the images in use: a change to an image in the DAM requires an update of the preview and republishing of the containing pages.\n\nApproach B: Asset Management Delivery\n\nSimilar to Approach A, a browsing interface (e.g. an asset picker) is provided that allows the author to select the image to use. But instead of inserting it as an image into the document, it is managed as an external link that points directly to the DAM's CDN.\nContent sources such as Document Authoring or Universal Editor may still display the image to the author, based on the known URL structure (e.g. hostname or URL prefix) of the links to the DAM, but from a content modeling perspective it is seen as a link, to make sure that the content structure stays compatible across all supported content sources.\n\nIn the browser, the linked URLs to the assets are rewritten from <a> to <picture> and <img> tags, depending on the capabilities of the asset delivery system. 
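A minimal sketch of such a rewrite, working on the string level for brevity (real projects typically operate on the DOM in scripts.js; the DAM hostname, the helper name, and the breakpoint/query parameters are illustrative assumptions, not boilerplate code):

```javascript
// Sketch: turn a DAM asset link into responsive <picture> markup.
// The hostname check, helper name, and query parameters are assumptions
// for illustration and depend on the asset delivery system in use.
function damLinkToPicture(href, alt = '') {
  // Only rewrite links that point at the (assumed) DAM delivery host.
  if (!href.startsWith('https://delivery.example-dam.com/')) return null;
  return [
    '<picture>',
    `  <source media="(min-width: 600px)" srcset="${href}?width=2000&format=webply">`,
    `  <img src="${href}?width=750&format=webply" alt="${alt}" loading="lazy">`,
    '</picture>',
  ].join('\n');
}
```

Links to other hosts are left untouched (the function returns null), so the content model stays a plain link across all supported content sources while visitors get a responsive image.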
This rewriting happens very early in the page decoration process in scripts.js and has no negative performance impact by itself.\n\nTo keep delivery performance in production up to par, the CDN setup of the production domain needs to include a special route to the DAM's own CDN so that the LCP image can be delivered from the same origin, including cache settings, invalidation (if needed), as well as careful consideration of compression and dynamic image sizes.\n\nThis approach is useful in situations where images are dynamically manipulated based on the context of the site visitor, beyond different image sizes, formats, and compression, and where advanced manipulations provided by the DAM cannot be applied at authoring time. This approach assumes a disconnected lifecycle between images and content, and changes to an asset in the DAM are immediately reflected on the site.\n\nVideos\n\nShort videos can be uploaded as MP4 files to the Media Bus to allow for performance-focused, same-origin delivery of hero videos above the fold. For long-form videos we recommend using a DAM (like AEM Assets) or a video streaming service (like YouTube or Vimeo).\n\nFor the author, the reference to a video is a link in a document (similar to Approach B for centrally managed images). To be able to replace the media without changing the references, it is commonly recommended to use the human-readable redirect URL to the MP4 file as obtained during preview, as opposed to the media_<hash> URL.\n\nIt is also recommended to include a poster image that can be loaded quickly and used as a fallback if the autoplay feature of an ambient video is disabled (e.g. 
Mobile Safari in \"Low Power Mode\").\n\nPDF & SVG\n\nPDF and SVG files are both supported, but are delivered as part of the content (not via Media Bus) which means that they are following the regular preview and publish lifecycle and are not stored in a de-duplicated, content addressed fashion.","lastModified":"1762973893","labs":""},{"path":"/docs/cdn-guide","title":"Picking the right CDN","image":"/docs/media_141d72765656534440a69d2bf6e223feef96def5d.png?width=1200&format=pjpg&optimize=medium","description":"A Content Delivery Network makes sure that your visitors get your site served as fast as possible when entering your domain name. Your Adobe Experience ...","content":"style\ncontent\nPicking the right CDN\n\nA Content Delivery Network makes sure that your visitors get your site served as fast as possible when entering your domain name. Your Adobe Experience Manager license includes a CDN and Adobe Experience Manager integrates with many popular CDN choices, but if you are unsure which one to pick, answer the questions below\n\nQuestionnaire\nRubric\nhttps://main--helix-website--adobe.aem.page/tools/decisions.json?sheet=cdn\nFastly (managed by Adobe Experience Manager)\n\nAdobe Experience Manager as a Cloud Service bundles the Fastly CDN, which means it's included in your contract. Advanced configuration options require using configuration pipelines in Cloud Manager, so you combine the best of YAML with VCL.\n\nLearn how to set up Adobe Managed CDN\n\nCloudflare (Enterprise)\n\nAEM customers use Cloudflare Enterprise and benefit from fast and fine-granular purging, powerful extensibility with Cloudflare Workers, support for multiple backends, and zero-trust security support. It's also an all-around good CDN.\n\nLearn how to set up Cloudflare for Adobe Experience Manager\n\nCloudflare (Free)\n\nCloudflare's free option is a great choice for smaller sites, as the free tier comes with included traffic and support for Cloudflare Workers. 
It does not support key-based purging, which means that for some updates Adobe Experience Manager has to purge the entire site, making Cloudflare's free tier unsuitable for large sites with frequent content updates.\n\nLearn how to set up Cloudflare for Adobe Experience Manager\n\nCloudFront\n\nCloudFront is the CDN that is included in your Amazon Web Services contract. It is a passable CDN that is supported by Adobe Experience Manager, but many important features are missing, such as support for fine-granular content purges. This makes CloudFront unsuitable for sites with lots of content and a high frequency of updates.\n\nIf this does not discourage you, here is how to set up CloudFront for Adobe Experience Manager\n\nFastly (self-managed)\n\nIf you are already a Fastly customer, then you know why Adobe Experience Manager integrates with it out of the box: a rock-solid and blazingly fast CDN with fine-grained push invalidation that you can configure using Fastly's VCL language.\n\nLearn how to set up Fastly for Adobe Experience Manager\n\nFastly (managed by Adobe Commerce)\n\nAs an Adobe Commerce customer, you have access to the bundled Fastly CDN. It's a great choice of CDN: rock-solid, blazingly fast, and, thanks to VCL, extremely configurable, with plenty of public documentation.\n\nLearn how to set up Fastly for Adobe Experience Manager\n\nAkamai\n\nAkamai is the market-leading CDN, and if you are already a customer, there is no reason not to use it with Adobe Experience Manager. 
Akamai supports push invalidation at reasonable speed and can combine sites with multiple backends, which helps when migrating a legacy site to fast edge delivery.\n\nSet up Akamai for Adobe Experience Manager in 13 easy steps.","lastModified":"1759522837","labs":""},{"path":"/docs/operations","title":"Operations","image":"/docs/media_19a8c8f2c6f4390121c76bebd689e460832ebbc40.png?width=1200&format=pjpg&optimize=medium","description":"How we operate Edge Delivery Services in AEM.","content":"style\ncontent\n\nOperations\nCode Deployments\nService Code\n\nWe release small-increment (atomic) updates to our multi-tenant services on a constant basis, following the continuous delivery practice called Scaled Trunk-based Development. On average, there are roughly 80-100 releases per month. After ensuring 100% test coverage and a (human) code review, service updates are automatically rolled out to all customers simultaneously. The majority of them are dependency updates, which typically go completely unnoticed, or fixes for issues some or all customers were having. Features or enhancements can come feature-flagged for single customers or projects, meaning they will only be activated on an opt-in basis via configuration.\n\nCustomer Code\n\nUnlike traditional AEM as a Cloud Service, there is no CI/CD pipeline for Edge Delivery Services. Code changes are picked up directly from the branches in your GitHub or bring-your-own-git repository. Within seconds, every branch is automatically published under its own distinct URL for testing and staging of changes. Production code is typically served from the main branch.\n\nWe strongly recommend project developers follow the same Scaled Trunk-based Development model as we do for our services. This ensures you merge small pull requests into production often, while quality assurance and review efforts remain limited to small change sets. 
Nobody wants to review and test large pull requests, and long-lived branches with lots of changes tend to be difficult (and dangerous) to merge. For more details, please read our developer best practices.\n\nService Level Objectives\n\nThanks to our unique multi-cloud architecture, our services are designed with the highest resilience and availability in mind. But of course this doesn't fully protect us from the odd service outage. We aim for these service level objectives and have historically exceeded them:\n\nDelivery Service SLO\n99.99%\nPublishing Service SLO\n99.9%\n\nYour Service Level Agreement (SLA) with Adobe covers the exact availability assurances we offer, depending on your contract details. The relevant status reports can be found on status.adobe.com.\n\nObservability\nLogging\n\nAll our technical services feed into a centralized and redundant SIEM system, powered by Coralogix and Splunk. On top of this, alerts for critical error levels are triggered by thresholds that are continuously tuned as the services evolve. Our observability infrastructure is operated independently from the operational infrastructure.\n\nOur logging focuses on the perimeter, so that the operational properties of the individual services are prioritized over internal state. Depending on the nature of the collected logs, different retention policies are applied, ranging from two weeks to 25 months.\n\nMonitoring\n\nOur observability setup consists of a fine-grained set of highly sensitive synthetic monitors and log-based alerts for all our technical services. 
The slightest anomaly, be it related to a change we made ourselves or an issue with one of our third-party vendors, immediately alerts our on-call rotation.\n\nWe also have extra synthetic monitoring in place for our top 10 customer sites by traffic, which gives us confidence that our delivery service is performing and scaling as intended.\n\nIncident Management\n\nOur operations team is assigned to a 24/7 on-call rotation split into two 12-hour shifts on two continents. Adobe On-Call ensures prompt notification of on-call engineers through several channels, including phone calls, text messages, and push notifications. We vow to acknowledge every incident within 15 minutes, although in reality we are typically a lot quicker.\n\nWe maintain detailed runbooks for each type of incident to ensure we can restore the affected service as fast as possible. Our process includes root cause analysis (RCA), and we publish postmortems for every single incident, no matter how small the customer impact was.\n\nDisaster Recovery\n\nAdobe maintains detailed disaster recovery plans for all business services and regularly conducts disaster recovery tests to validate that both delivery and API services can be restored well within their respective intended RTOs.\n\nPublishing\n\nThe Admin API is currently a single-cloud deployment. In case of a disastrous outage in this service, the intended RTO is 12 hours.\n\nDelivery\n\nThe Content Hub is where all published content, media, and code is stored. For this tier, we rely on active/active replication in a multi-cloud setup. 
Unlike traditional approaches to disaster recovery such as active/standby or multi-region deployments, an active/active multi-cloud setup ensures that any published content, media, or code is stored redundantly in at least two different cloud providers with different but functionally identical software stacks.\n\nIn case of an outage, even a global outage of the first cloud provider's control plane that would affect a multi-region setup, all content is still available in the second cloud provider and operations can resume without data loss.\n\nThe active/active deployment means that during normal operations the workloads are split roughly equally between our cloud providers, and only in case of an outage at one does the remaining provider pick up the load.\n\nThe delivery service itself is also deployed redundantly, so in case of an outage at one cloud provider, Adobe can switch to the other and resume delivery near-instantaneously. The intended recovery time objective (RTO) for this service is 15 minutes.\n\nPrevious\n\nSecurity\n\nUp Next\n\nPeak-Traffic Events