Key Elements Of Technical SEO For Large Companies


Working with large organizations to improve their technical SEO is, in my opinion, one of the best and most enjoyable ways to practice technical skills.

More often than not, you’re faced with complex systems and infrastructures, a host of legacy issues, and different teams responsible for different sections of the website.

This means you need to work with a number of teams and prove the business case, "the why," to multiple stakeholders to enact change.

For this, you need strong technical SEO knowledge, but you also need the ability to make multiple people (and teams) care about why something is an issue, and the reasons why they should be invested in fixing it.

Juggling complex technical issues and maintaining communications with multiple stakeholders, ranging from C-level through to brand, product, and engineering teams (in addition to your direct contacts), can be an overwhelming experience.

But it also provides great experience and allows you to develop key technical SEO skills outside of checklists and best practices. These are valuable experiences you can then apply to more run-of-the-mill technical projects.

Issue Communication At Scale

Enterprise brands have large teams, and you’ll need to coordinate and work with multiple teams to get things done.

Some companies have these teams operating as one unit, with known overlaps and free-flowing communications.

Others operate teams in silos, with the website (or websites) and/or regions being carved up between different teams. This can make it harder to show results in the more "traditional" way, and harder to get buy-in for resolving site-wide technical issues.

Each team within the business has its own set of priorities – and often its own key performance indicators (KPIs).

While the marketing teams may be broken up, engineering teams are usually a single resource in the business, so you’re competing against the other marketing teams, brands, and products.

This means you not only need to make sure your main point of contact cares about the issue, but also need to communicate to the wider teams how resolving it is in their best interests.

The way to do this is through effective, multi-department reporting.

This doesn't mean producing one big report for all departments to pick and choose from. Instead, use the data available to you to create multiple simple, clean, and digestible reports, each communicating to a stakeholder group the metrics that matter to them and influence their ability to be successful.

These can be as simple as Looker Studio reports or, if you’re API savvy, your own reporting dashboards.
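If you do go the API route, the sketch below shows one possible shape for this: pulling Search Console data and rolling it up by the site section each team owns. It assumes the google-api-python-client and google-auth libraries, a verified property, and a service-account key; the property URL, file path, and team-to-section mapping are all hypothetical placeholders.

```python
# A minimal sketch of per-team Search Console reporting. The key file,
# property URL, and team-to-section mapping below are placeholders.
from collections import defaultdict

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("searchconsole", "v1", credentials=creds)

# Map each stakeholder team to the site section it owns.
TEAM_SECTIONS = {
    "product": "/products/",
    "content": "/blog/",
    "support": "/help/",
}

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2024-01-01",
        "endDate": "2024-01-31",
        "dimensions": ["page"],
        "rowLimit": 25000,
    },
).execute()

# Roll clicks and impressions up to the owning team.
totals = defaultdict(lambda: {"clicks": 0, "impressions": 0})
for row in response.get("rows", []):
    page = row["keys"][0]
    for team, prefix in TEAM_SECTIONS.items():
        if prefix in page:
            totals[team]["clicks"] += row["clicks"]
            totals[team]["impressions"] += row["impressions"]

for team, metrics in totals.items():
    print(f"{team}: {metrics['clicks']} clicks, {metrics['impressions']} impressions")
```

From an output like this, each team's numbers can feed its own Looker Studio page rather than one monolithic report.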

Standard Operating Procedures (SOPs)

SOPs allow you to create a framework with the client that sets benchmarks for consistency and scalability, and documents key changes, decisions, and implementations.

Creating a knowledge center to document key changes is common practice, even outside of an enterprise, but developing SOPs that are reviewed and revised regularly goes one step further.

This also helps the client onboard new team members and bring them up to speed faster. It provides frameworks for other client teams, too, reducing the risk of those teams departing from an agreed best practice for the brand, experimenting with something they've read on a random blog, or acting on something suggested by a large language model (LLM).

You can develop SOPs for all manner of scenarios, but from experience, there are three common SOPs that cover a range of basics and mitigate potential “SEO risk” from a technical SEO perspective:

  • Internal linking.
  • Image optimization.
  • URL structures.

Internal Linking

Internal links are crucial for SEO. Every content piece, except for landing pages, should include internal links where relevant. A simple SOP for this could be:

  • Avoid using non-descriptive anchor text, such as “here” or “this article,” and provide some context as to the page being linked to.
  • Avoid internal links without context, such as automating the first or second instance of a word or phrase on each page to point to one specific page.
  • Use Ahrefs’ Internal Link Opportunities tool or Google search (site:[yourdomain.com] “keyword”) to find linking opportunities.
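For the last point, a script can surface the same opportunities at a larger scale. Below is a minimal sketch, assuming the requests and BeautifulSoup libraries, that flags pages mentioning a keyword without linking to the target page; the URLs and keyword are hypothetical placeholders.

```python
# A minimal sketch for spotting internal link opportunities: pages that
# mention a target keyword but never link to the target URL.
import requests
from bs4 import BeautifulSoup

TARGET_URL = "https://www.example.com/guides/technical-seo"  # placeholder
KEYWORD = "technical seo"                                    # placeholder
PAGES_TO_CHECK = [
    "https://www.example.com/blog/site-migrations",
    "https://www.example.com/blog/crawl-budget",
]

for page in PAGES_TO_CHECK:
    html = requests.get(page, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True).lower()
    links = {a.get("href", "") for a in soup.find_all("a")}
    mentions_keyword = KEYWORD in text
    links_to_target = any(TARGET_URL in href for href in links)
    if mentions_keyword and not links_to_target:
        print(f"Opportunity: {page} mentions '{KEYWORD}' but doesn't link to it.")
```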

Image Optimization

Many overlook image SEO, but optimizing images can improve page load speeds – and, if important to you, improve your visibility within image search. A good SOP should include:

  • Using descriptive file names, and not keyword stuffing them.
  • Writing alt text that accurately describes the image for accessibility, and not including sales messaging within them.
  • Choosing the right file format and compressing images to improve load speed.
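The compression step is easy to script. Below is a minimal sketch using Pillow to resize oversized images and re-encode them as WebP; the folder path and the size/quality thresholds are assumptions to tune per project.

```python
# A minimal sketch of batch image compression with Pillow. The folder
# path, width cap, and quality setting are assumptions, not a standard.
from pathlib import Path

from PIL import Image

MAX_WIDTH = 1600   # px; cap to what the layout actually renders
QUALITY = 80       # WebP quality; balance file size against artifacts

for src in Path("images/").glob("*.jpg"):
    with Image.open(src) as img:
        if img.width > MAX_WIDTH:
            ratio = MAX_WIDTH / img.width
            img = img.resize((MAX_WIDTH, round(img.height * ratio)))
        dest = src.with_suffix(".webp")
        img.save(dest, "WEBP", quality=QUALITY)
        print(f"{src.name} -> {dest.name}")
```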

URL Structures

Ensure URLs are optimized for search engines and users by making them clear, concise, and keyword-relevant. The SOP should cover:

  • Removing unnecessary stop words, punctuation, and white spaces (which become "%20" when URL-encoded).
  • Using hyphens instead of underscores.
  • Not keyword stuffing the URLs.
  • Using parameters that don’t override the source or trigger a new session within Google Analytics 4.
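These rules are also easy to enforce in code at the CMS or build layer. Below is a minimal sketch of a slug helper that applies them; the stop-word list is a small, hypothetical example you would extend per brand.

```python
# A minimal sketch of a slug helper enforcing the SOP above: lowercase,
# hyphens instead of underscores or spaces, punctuation and common stop
# words removed. The stop-word list is an illustrative assumption.
import re

STOP_WORDS = {"a", "an", "and", "of", "the", "to"}

def slugify(title: str) -> str:
    slug = title.lower().replace("_", " ")
    slug = re.sub(r"[^a-z0-9\s-]", "", slug)           # drop punctuation
    words = [w for w in slug.split() if w not in STOP_WORDS]
    return "-".join(words)                             # hyphens, no %20s

print(slugify("A Guide to the Key Elements of Technical SEO"))
# -> "guide-key-elements-technical-seo"
```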

Technical Auditing Nuances

One of the more complex elements of performing a technical audit on any enterprise website with a large number of URLs is crawling.

There are a number of ways you can tackle enterprise website crawling, but two common nuances I come across are the need to perform routine sample crawls, and tackling the crawl of a multi-stack domain.

Sample Crawling

Sample crawling is an efficient way to diagnose large-scale SEO issues without the overhead of a full crawl.

By using strategic sampling methods, prioritizing key sections, and leveraging log data, you can gain actionable insights while preserving crawl efficiency.

Your sample should be large enough to reflect the site’s structure but small enough to be efficient.

I typically work to the following guidelines, based on the size of the website (or of the subdomain or subfolder being crawled).

Size   | Number of URLs    | Sample Size
Small  | <10,000           | Crawl all, or 90%+, of the URLs.
Medium | 10,000 to 500,000 | 10% to 25%, depending on which end of the spectrum your number of URLs falls.
Large  | >500,000          | A 1-5% sample, focusing on key sections.

You also want to choose your samples strategically, especially when your number of URLs enters hundreds of thousands or millions. There are four main types of sampling:

  • Random Sampling: Select URLs randomly to get an unbiased overview of site health.
  • Stratified Sampling: Divide the site into key sections (e.g., product pages, blog, category pages) and sample from each to ensure balanced insights.
  • Priority Sampling: Focus on high-value pages such as top-converting URLs, high-traffic sections, and newly published content.
  • Structural Sampling: Crawl the site based on the internal linking hierarchy, starting with the homepage and main category pages.
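Stratified sampling, in particular, is straightforward to script against a URL export. Below is a minimal sketch that groups URLs by their first path segment and samples a fixed percentage from each group; the input file and the 10% rate are assumptions you would tune against the table above.

```python
# A minimal sketch of stratified sampling: group a URL export by its
# first path segment, then sample a fixed percentage from each group.
# "urls.txt" and the 10% rate are assumptions, not fixed recommendations.
import random
from collections import defaultdict
from urllib.parse import urlparse

SAMPLE_RATE = 0.10

strata = defaultdict(list)
with open("urls.txt") as f:
    for line in f:
        url = line.strip()
        if not url:
            continue
        segments = urlparse(url).path.strip("/").split("/")
        strata[segments[0] or "root"].append(url)

sample = []
for section, urls in strata.items():
    k = max(1, round(len(urls) * SAMPLE_RATE))  # at least one URL per section
    sample.extend(random.sample(urls, min(k, len(urls))))

print(f"Sampled {len(sample)} URLs across {len(strata)} sections.")
```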

Crawling Multi-Stack Websites

Crawling websites built on multiple stacks requires a strategy that accounts for different rendering methods, URL structures, and potential roadblocks like JavaScript execution and authentication.

This also means you can’t just crawl the website in its entirety and make broad, sweeping recommendations for the “whole website.”

The following is a very top-line checklist covering the key areas and "bases" you may encounter:

  1. Identify and map out which parts of the site are server-rendered vs. client-rendered.
  2. Determine which areas require authentication, such as user areas.
  3. If sections require login (e.g., product app), use session cookies or token-based authentication in Playwright/Puppeteer (see the sketch after this list).
  4. Set crawl delays if rate-limiting exists.
  5. Check for lazy-loaded content (scrolling or clicking).
  6. Check if public API endpoints offer easier data extraction.
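For steps 1-4, below is a minimal sketch using Playwright's sync Python API: it renders client-side pages with a session cookie attached and a polite crawl delay. The cookie name and value, the domain, and the URLs are all hypothetical placeholders.

```python
# A minimal sketch of an authenticated, JS-rendered crawl with Playwright.
# The cookie, domain, URLs, and delay are placeholders to adapt per site.
import time

from playwright.sync_api import sync_playwright

AUTH_COOKIE = {
    "name": "session_id",          # hypothetical cookie name
    "value": "YOUR_SESSION_TOKEN",  # supplied by the client/login flow
    "domain": ".example.com",
    "path": "/",
}
URLS = [
    "https://app.example.com/dashboard",
    "https://app.example.com/settings",
]
CRAWL_DELAY = 2  # seconds; respect any rate limiting

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()
    context.add_cookies([AUTH_COOKIE])
    page = context.new_page()
    for url in URLS:
        page.goto(url, wait_until="networkidle")  # let hydration finish
        print(url, page.title())
        time.sleep(CRAWL_DELAY)
    browser.close()
```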

A good example of this is a website I worked on for a number of years. It had a complex stack that required different crawling methods to identify issues at a meaningful scale.

Stack Component | Approach
Nuxt            | If using SSR or SSG, standard crawling works. If using client-side hydration, enable JavaScript rendering.
Ghost           | Typically SSR, so a normal crawl should work. If using its API, consider pulling structured data for better insights.
Angular         | Needs JavaScript rendering. Tools like Puppeteer or Playwright help fetch content dynamically. Handle infinite scrolling or lazy loading carefully.
Zendesk         | Zendesk often has bot restrictions. Check for API access, or RSS feeds for help center articles.

The above are the more extreme approaches to crawling. If your crawling tool can render webpages itself, use that rather than standalone tools like Puppeteer to fetch content.

Final Thought

Working on technical SEO for large organizations presents unique challenges, but it also offers some of the most rewarding experiences and learning opportunities in the field – ones that not all SEO professionals are fortunate enough to encounter.

Making a lot of the “day-to-day” more manageable – and gaining buy-in from as many client stakeholders as possible – can lead to a better client-agency relationship, and lay the foundations for strong SEO campaigns.
