100% Free Updated SEMrush Technical SEO Certification Exam Questions & Answers.
Enjoy the Technical SEO course. Put what you’ve learned to the test with this free exam.
Download SEMrush Technical SEO Exam Answers – PDF
SEMrush Technical SEO Certification Assessment Details:
- Questions: 34
- Time limit: 40 minutes to complete the assessment
- Passing score: 70% or higher
- Retake period: If you don’t pass the assessment, you can retake the exam immediately.
- Validity Period: 12 Months
🛒 Hire us: It is very hard to take an exam in the middle of a busy schedule. That’s why we are here. If you don’t have enough time, hire us. We will take all kinds of exams on your behalf, and we offer the LOWEST PRICE on the internet for taking the exam. Contact Us Now.
🙏 Help Us to Better Serve You: If you can’t find a question, or if you think a question’s answer is wrong, let us know. We will update our solution sheet as soon as possible. Contact Us Now.
To take the SEMrush Technical SEO exam, follow the steps below:
👣 Step 1: Go here https://www.semrush.com/newacademy/exams/technical-seo-exam and sign in with your SEMrush account.
👣 Step 2: Start your exam.
👣 Step 3: Copy (Ctrl+C) the question from the SEMrush exam section, then find it here (Ctrl+F) to get the correct answer.
👣 Step 4: After completing the exam, you will get the SEMrush Technical SEO Certificate.
(Click on the questions to get the correct answers.)
✅ True or false? It is not possible to have multiple robots meta tags.
- False
- True
- True
- False
- True
- False
- True
- False
- True
- False
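The first question above is a good reminder that a page can carry more than one robots meta tag, for example one generic tag plus one that addresses a specific crawler. Here is a minimal sketch, using only Python's standard html.parser and a made-up HTML snippet, that collects every robots-related meta tag it finds:

```python
from html.parser import HTMLParser

# Made-up page markup: one robots meta tag for all crawlers
# and a second one addressing Googlebot specifically.
HTML = """
<html><head>
  <meta name="robots" content="noindex">
  <meta name="googlebot" content="nofollow">
</head></html>
"""

class RobotsMetaCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.robots_tags = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Any meta tag named "robots" or after a specific crawler counts.
        if tag == "meta" and attrs.get("name", "").lower() in ("robots", "googlebot", "bingbot"):
            self.robots_tags.append((attrs["name"], attrs.get("content")))

parser = RobotsMetaCollector()
parser.feed(HTML)
print(parser.robots_tags)  # [('robots', 'noindex'), ('googlebot', 'nofollow')]
```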
✅ What elements should text links consist of to ensure the best possible SEO performance?
- Anchor text, a-tag with href-attribute
- Nofollow attribute, anchor text
- a-tag with href-attribute, noindex attribute
- The number of links pointing at a certain page
- The value a hyperlink passes to a particular webpage
- Optimized website link hierarchy
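For the text-link question above: a crawlable, SEO-friendly text link is an a-tag with an href attribute and descriptive anchor text. A small standard-library sketch (the HTML fragment is made up) that keeps only links meeting both criteria:

```python
from html.parser import HTMLParser

HTML = """
<a href="/pricing">See our pricing</a>
<a onclick="goTo('/about')">About us</a>
<a href="/contact"></a>
"""

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a":
            # Keep only links that have both an href and non-empty anchor text.
            if self._href and self._text.strip():
                self.links.append((self._href, self._text.strip()))
            self._href = None

checker = LinkChecker()
checker.feed(HTML)
print(checker.links)  # only ('/pricing', 'See our pricing') survives
```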
✅ What are the two most commonly known best practices to increase crawling effectiveness?
- Multiple links to a single URL
- Using linkhubs
- Meta robots nofollow
- Interlink relevant contents with each other
- Internal, link-level rel-nofollow
✅ Choose three statements referring to XML sitemaps that are true:
- XML sitemaps must only contain URLs that give an HTTP 200 response
- It is recommended to use gzip compression and UTF-8 encoding
- There can be only one XML sitemap per website
- XML sitemaps should usually be used when a website is very extensive
- It is recommended to have URLs that return non-200 status codes within XML sitemaps
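The true statements above (200-only URLs, UTF-8 encoding, gzip compression) are easy to bake into a sitemap generator. A rough sketch, standard library only; the URL list and the status check are placeholders for whatever your CMS or crawler provides:

```python
import gzip
import urllib.request

# Placeholder URL list; in practice this would come from your CMS or crawl data.
urls = ["https://example.com/", "https://example.com/pricing"]

def returns_200(url):
    # Only URLs that answer with HTTP 200 should end up in the sitemap.
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except Exception:
        return False

entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls if returns_200(u))
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n</urlset>\n"
)

# UTF-8 encoded and gzip compressed, as recommended.
with gzip.open("sitemap.xml.gz", "wb") as f:
    f.write(sitemap.encode("utf-8"))
```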
✅ Choose a factor that affects the crawling process negatively.
- Duplicate pages/content
- A well-defined hierarchy of the pages
- Content freshness
✅ Choose two statements that are false about the SEMrush Audit Tool.
- It can be downloaded to your local computer
- It can’t audit desktop and mobile versions of a website separately
- It provides you with a list of issues with ways of fixing
- It allows you to include or exclude certain parts of a website from audit
✅ What is the proper instrument to simulate Googlebot activity in Chrome?
- Reverse DNS lookup
- User Agent Overrider
- User Agent Switcher
- Less than ones without noindex
- Never
- Occasionally
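A user-agent switcher simply makes the browser announce itself as Googlebot. The same idea from the command line, as a rough sketch with a placeholder URL, is to request a page twice with different User-Agent headers and compare what comes back:

```python
import urllib.request

URL = "https://example.com/"  # placeholder URL
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def fetch(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status, len(resp.read())

# Compare what a regular browser gets with what "Googlebot" gets.
print("Browser UA  :", fetch(URL, "Mozilla/5.0"))
print("Googlebot UA:", fetch(URL, GOOGLEBOT_UA))
```

Note that this only spoofs the user-agent string; verifying that a visitor really is Googlebot requires the reverse DNS lookup mentioned in the first answer option.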
✅ Choose two correct statements about a canonical tag:
- It should point to URLs that serve HTTP 200 status codes
- It is useful to create canonical tag chaining
- Each URL can have several rel-canonical directives
- Pages linked by a canonical tag should have identical or at least very similar content
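Both correct statements above can be turned into an automated check: a page should carry exactly one rel-canonical directive, and its target should answer with HTTP 200. A rough standard-library sketch; the HTML snippet is made up, and keep in mind that urlopen follows redirects, so a stricter check would disable that:

```python
import re
import urllib.error
import urllib.request

# Made-up page source; in practice you would fetch the page first.
HTML = '<head><link rel="canonical" href="https://example.com/product"></head>'

canonicals = re.findall(r'<link[^>]+rel="canonical"[^>]+href="([^"]+)"', HTML)

# There should be exactly one canonical per URL...
assert len(canonicals) == 1, "a URL should carry a single rel-canonical directive"

# ...and the canonical target should serve HTTP 200.
try:
    with urllib.request.urlopen(canonicals[0], timeout=10) as resp:
        print(canonicals[0], "->", resp.status)
except urllib.error.HTTPError as err:
    print(canonicals[0], "->", err.code, "(canonical target should return 200)")
```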
✅ Fill in the blank. It’s not wise to index search result pages because _____
- Google prefers them over other pages because they are dynamically generated and thus very fresh.
- they do not pass any linkjuice to other pages
- those pages are dynamic and thus can create bad UX for the searcher
- It is important to have all sub-pages of a category being indexed
- Proper pagination is required for the overall good performance of a domain in search results
- rel=next and rel=prev attributes explain to Google which page in the chain comes next or appeared before it
- Pagination is extremely important in e-commerce and editorial websites
- Using the X-robots-tag and the noindex attribute
- Introducing hreflang using X-Robots headers
- Using the X-robots rel=canonical header
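The last answer options above refer to the X-Robots-Tag response header, which applies directives such as noindex without touching the HTML, handy for internal search result pages or PDFs. A minimal sketch with Python's built-in http.server; the /search path is just an example:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Keep internal search result pages out of the index via a header.
        if self.path.startswith("/search"):
            self.send_header("X-Robots-Tag", "noindex, follow")
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<html><body>Search results</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```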
✅ What does the 4XX HTTP status code range refer to?
- Server-side errors
- Client-side errors
- Redirects
✅ Check all three reasons for choosing a 301 redirect over a 302 redirect:
- The rankings will be fully transferred to the new URL
- Link equity will be passed to the new URL
- To not lose important positions without any replacement
- The new URL won’t have any redirect chains
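Whether a redirect is a permanent 301 or a temporary 302 is easy to verify by requesting the URL without following redirects. A small sketch with a placeholder URL:

```python
import http.client
from urllib.parse import urlparse

def redirect_status(url):
    """Return the raw status code and Location header without following redirects."""
    parts = urlparse(url)
    conn = http.client.HTTPSConnection(parts.netloc, timeout=10)
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location")

# Placeholder URL of a page that has been moved.
print(redirect_status("https://example.com/old-page"))
# A 301 passes link equity and rankings on to the new URL; a 302 only signals a temporary move.
```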
✅ When is it better to use the 410 error rather than the 404? Choose two answers:
- When there is another page to replace the deleted URL
- If the page can be restored in the near future
- When the page existed and then was intentionally removed, and will never be back
- When you want to delete the page from the index as quickly as possible and are sure it won’t ever be back
✅ What is the best solution when you know the approximate time of maintenance work on your website?
- Using the 503 status code with the retry-after header
- Using the HTTP status code 200
- Using the noindex directive in your robots.txt file
- Using the 500 status code with the retry-after header
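The 410 question and the maintenance question above both come down to sending the right status code. A minimal sketch with Python's built-in http.server; the removed path and the maintenance flag are made-up examples:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

REMOVED_PATHS = {"/discontinued-product"}  # hypothetical, intentionally removed URLs
MAINTENANCE = True                          # toggle during planned maintenance

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if MAINTENANCE:
            # 503 tells crawlers the outage is temporary; Retry-After says when to come back.
            self.send_response(503)
            self.send_header("Retry-After", "3600")  # seconds until the work is finished
            self.end_headers()
            return
        if self.path in REMOVED_PATHS:
            # 410 Gone: the page was removed on purpose and will never come back.
            self.send_response(410)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```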
✅ Choose three answers. What information can be found in an access-logfile?
- The method of the request (usually GET/POST)
- The request URL
- The server IP/hostname
- Passwords
- The time spent on a URL
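The fields listed above (request method, requested URL, plus the timestamp, status code and user agent) all live in the web server's access log. A small sketch that parses one line in the common "combined" log format; the sample line is made up:

```python
import re

# Made-up access-log line in the widespread "combined" format.
LINE = ('66.249.66.1 - - [12/Mar/2024:10:15:32 +0000] '
        '"GET /pricing HTTP/1.1" 200 5123 "-" '
        '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"')

PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\S+) "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

match = PATTERN.match(LINE)
if match:
    fields = match.groupdict()
    print(fields["method"], fields["url"], fields["status"], fields["agent"])
```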
✅ Which HTTP code ranges refer to crawl errors? Choose two answers.
- 2xx range
- 3xx range
- 5xx range
- 4xx range
✅ Choose two statements that are right.
- It is not a good idea to combine different data sources for deep analysis. It’s much better to concentrate on just one data source, e.g. logfile
- Combining data from logfiles and webcrawls helps compare simulated and real crawler behavior
- If you overlay your sitemap with your logfiles, you may see a lack of internal links that shows that the site architecture is not working properly
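The sitemap/logfile overlay mentioned in the last statement boils down to set arithmetic: pages that are in the sitemap but never requested by Googlebot are probably weakly linked internally. A toy sketch with made-up data:

```python
# Hypothetical inputs: URLs from your XML sitemap and URLs Googlebot actually
# requested, extracted from the access log (see the log-parsing sketch above).
sitemap_urls = {"/", "/pricing", "/blog/seo-guide", "/legacy-landing-page"}
crawled_urls = {"/", "/pricing", "/blog/seo-guide", "/tag/misc"}

never_crawled = sitemap_urls - crawled_urls   # possible internal linking problem
not_in_sitemap = crawled_urls - sitemap_urls  # crawl budget spent outside the sitemap

print("In sitemap but never crawled:", never_crawled)
print("Crawled but missing from sitemap:", not_in_sitemap)
```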
✅ Choose two answers. Some disadvantages of ccTLDs are:
- They have strong default geo-targeting features, e.g. .fr for French
- They may be unavailable in different regions/markets
- They need to be registered within the local market, which can make it expensive
- 301 and 303
- 302 and 301
- 302 and 303
- <link rel="alternate" href="http://example.com/" hreflang="x-default"/>
- <link rel="alternate" href="http://example.com/en" hreflang="uk"/>
- <link rel="alternate" href="http://example.com/en" hreflang="en-au"/>
- HTTP
- HTTPS
- FTP
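The hreflang snippets a few lines above show the general pattern: every language/region version of a page declares alternates for all other versions plus an x-default. Keep in mind that hreflang values must be ISO 639-1 language codes, optionally followed by an ISO 3166-1 region, so English for the United Kingdom is en-gb (uk is actually the code for Ukrainian). A tiny sketch that generates such a block from a made-up mapping:

```python
# Hypothetical language/region versions of the same page.
versions = {
    "x-default": "http://example.com/",
    "en-gb": "http://example.com/en",
    "en-au": "http://example.com/au",
    "de-de": "http://example.com/de",
}

# Every version of the page should carry the full, reciprocal set of annotations.
for hreflang, href in versions.items():
    print(f'<link rel="alternate" href="{href}" hreflang="{hreflang}"/>')
```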
✅ What are the two valid statements with regard to the critical rendering path (CRP)?
- The non-critical CSS is required when the site starts to render
- There is an initial view (which is critical) and below-the-fold-content
- CRP on mobile is bigger than on a desktop
- The “Critical” tool on GitHub helps to build CSS for CRP optimisation
✅ Choose two optimization approaches that are useful for performance optimization:
- Avoid using new modern formats like WebP
- Asynchronous requests
- Increase the number of CSS files per URL
- Proper compression & meta data removal for images
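Two of the items above, modern formats like WebP and compression plus metadata removal for images, can be sketched with the Pillow library (assumed to be installed; the file names are placeholders):

```python
from PIL import Image  # pip install Pillow

# Placeholder file names for illustration.
img = Image.open("hero-image.jpg")

# Re-saving drops EXIF and other metadata unless it is passed along explicitly,
# and WebP typically compresses better than JPEG at comparable quality.
img.save("hero-image.webp", "WEBP", quality=80)
```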
✅ Choose the correct statement about mark-up.
- Invalid mark-up still works, so there’s no need to control it
- Even if GSC says that your mark-up is not valid, Google will still consider it
- Changes in HTML can break the mark-up, so monitoring is needed
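The monitoring point above can be partly automated: if a template change breaks your structured data, JSON-LD blocks often stop being valid JSON. A rough standard-library sketch on a made-up HTML fragment (it only checks JSON syntax, not whether the schema itself is valid):

```python
import json
import re

# Made-up page source containing a JSON-LD structured data block.
HTML = """
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product", "name": "Widget"}
</script>
"""

blocks = re.findall(
    r'<script type="application/ld\+json">(.*?)</script>', HTML, re.DOTALL
)
for block in blocks:
    try:
        data = json.loads(block)
        print("Valid JSON-LD block of type:", data.get("@type"))
    except json.JSONDecodeError as err:
        # A broken block here usually means an HTML change damaged the mark-up.
        print("Broken JSON-LD block:", err)
```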
✅ Choose a valid statement about AMP:
- Using AMP is the only way to get into the Google News carousel/box
- AMP implementation is easy, there’s no need to rewrite HTML and build a new CSS
- CSS files do not need to be inlined as non-blocking compared to a regular version
- A regular website can never be as fast as an AMP version
- rel=amp HTML tags
- hreflang tags
- Canonical tags
- Responsive web design
- Independent/standalone mobile site
- Dynamic serving
- True
- False