Wednesday 29 June 2016

Digital Marketing Interview Q & A

1. Why Choose a Digital Marketing Career ?
Digital marketing plays a vital role in today's market. Organizations of every size, from mid-scale to large-scale, are looking to promote their websites and grow their business. Digital marketing is the targeted, measurable, and interactive marketing of products & services using digital technologies to reach leads and convert them into customers. The key objective is to promote brands, build preference & increase sales through various digital marketing techniques. Digital marketing will keep growing in the coming years, which will generate good opportunities.

2. What is a Website & How Many Websites Are There in the World ?

A website is a set of related web pages, typically served from a single web domain, that can be accessed by visiting its home page in a browser.
There are currently more than one billion websites in the world.

3. What is WordPress & Benefits of WordPress ?

WordPress is a free and open-source content management system (CMS). Simply put, this means that with WordPress you can easily create and manage your website without any coding experience.

Benefits:

Free - Anyone can download and use it to run a website. 

Easy to use - If you can handle Microsoft Word, you'll do fine. 

Customization - With dozens of free and premium themes and plugins, you can create almost anything with WordPress. 

Safe - Despite much talk to the contrary, the WordPress core is very secure. Most attacks on WordPress sites come through badly coded third-party themes and plugins. So don't believe it when people say WordPress is not secure. 

Supported - WordPress now powers more than 24% of the web. With a community this big, you can always find an answer to almost any WordPress-related question you might come across. Finding a developer for a WordPress task is also not too hard.

4. What is a Robots.txt File & Its Syntax ?

Definition :

Robots.txt is a file used to exclude content from the crawling process of search
engine spiders / bots. It is also called the Robots Exclusion Protocol.

The robots.txt file can have a large effect on how search engines crawl your website. The file is not required, but it provides instructions to search engines on how to crawl the site, and it is supported by all major search engines. A robots.txt file is composed of allow and disallow statements that tell search engines which sections of the site they should and shouldn't crawl.

Why to use robots.txt :

We generally want our web pages to be indexed by search engines. But there may be
some content that we don't want crawled & indexed: a personal images folder, the
website administration folder, a web developer's customer test folders, folders with no
search value such as cgi-bin, and many more. The main idea is that we don't want them to be indexed.

Is robots.txt a guaranteed solution ?

Standards-based bots like Google's, Yahoo!'s, or other big search engines' robots obey
your robots.txt file because they are programmed to. However, a bot can be written to
ignore the robots.txt file entirely, so it is a convention, not an enforcement mechanism.

How to use robots.txt file :

The robots.txt file has a few simple directives that manage the bots:

User-Agent: [Spider or Bot name]
Disallow: [Directory or File Name]

Robots.txt example:

User-agent: *               # applies to all search engine spiders
Disallow: /secretcontent/   # tells them not to crawl the /secretcontent/ folder

User-Agent: this parameter defines which bots the rules that follow apply to. * is a
wildcard meaning all bots; a specific name such as Googlebot targets Google only.

Disallow: defines which folders or files are excluded. Leaving the value empty means
nothing is excluded, / means the whole site is excluded, and /foldername/ or /filename
excludes a specific path. Matching is by prefix: Disallow: /folder/ excludes everything
inside that folder, while Disallow: /folder excludes every URL path that begins with /folder.

> Protect specific directories from robots. 
> Protect specific pages from robots.
> Prevent a specific robot from accessing your site.
> Allow only one specific robot access.
The robots.txt file should always live at the root of the domain: http://www.domain.com/robots.txt
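Rules like the ones above can also be checked programmatically. Here is a minimal sketch using Python's standard-library urllib.robotparser; the domain and paths are just the illustrative values used above:

```python
from urllib.robotparser import RobotFileParser

# Rules matching the example above (hypothetical site)
rules = [
    "User-agent: *",
    "Disallow: /secretcontent/",
]

rp = RobotFileParser()
rp.parse(rules)

# A normal page is allowed; anything under /secretcontent/ is not.
print(rp.can_fetch("*", "http://www.domain.com/index.html"))            # True
print(rp.can_fetch("*", "http://www.domain.com/secretcontent/a.html"))  # False
```

This is the same parsing logic a well-behaved crawler applies before fetching a URL.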

Note: the tags below are not robots.txt syntax but meta robots tags, a per-page alternative placed in a page's <head>:

<meta name="robots" content="index,follow" />
<meta name="googlebot" content="all,follow" />
<meta name="bingbot" content="all,follow" />
  
5. What is Canonical Form ? 

Canonical form refers to the original version of a piece of content. When two or more web pages have the same content, i.e. duplicate content, we place a canonical URL on the duplicate pages so that search engines can easily understand that the original content lives somewhere else.
Syntax : 
<link rel="canonical" href="http://www.xyz.com/" /> 

Example :

Duplicate Page :

page: http://www.xyz.com/index.asp/abc.htm

<link rel="canonical" href="http://www.xyz.com/abc.htm" />


Proper Page :
page: http://www.xyz.com/abc.htm

<link rel="canonical" href="http://www.xyz.com/abc.htm" />

The rel=canonical tag is meant to be very easy to add to a page.
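To see the canonical tag from a crawler's point of view, here is a minimal sketch that extracts the canonical URL from a page's HTML using only Python's standard library; the HTML snippet and the CanonicalFinder class name are illustrative:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical" and self.canonical is None:
            self.canonical = attrs.get("href")

html = '<head><link rel="canonical" href="http://www.xyz.com/abc.htm" /></head>'
finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # http://www.xyz.com/abc.htm
```

A search engine does essentially this when consolidating duplicate pages: it reads the canonical link and credits the indicated URL.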

6. What is Fetch as Google ? 

Fetch as Google is a tool found in Google Webmaster Tools. It shows how Google crawls a given URL on your site.

Restrictions :

> Fetched URLs are limited to the current site.
> It does not send cookies or login information.
> It does not follow redirects.

Fetch Status : 

a) Success : The URL has been crawled successfully, so we can submit it to Google and Google can index it. Additionally, you can check how Google sees your website by clicking the word Success.

b) Failed : Google cannot connect to the URL. Google will try several times; if it still cannot crawl the page, you must check the website and the URL, because something is wrong that prevents Google from reaching it.

c) Not Found : Google cannot find your URL.
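Fetch as Google itself runs inside Webmaster Tools, but the basic idea of a crawler-style fetch can be sketched with Python's standard library: build a request carrying a bot User-Agent string. The URL below is a placeholder, and this is only a rough approximation of what the tool does:

```python
import urllib.request

# Googlebot's published desktop User-Agent string
BOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

# Build the request without sending it, so no network access happens here.
req = urllib.request.Request(
    "http://www.example.com/",
    headers={"User-Agent": BOT_UA},
)
print(req.get_header("User-agent"))
# Actually sending it would be: urllib.request.urlopen(req)
```

Note that this only imitates the User-Agent header; the real tool also reports Google-side crawl status, which no local script can reproduce.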

7. Difference between Google Analytics & Webmaster Tools ?

Google Analytics reports how visitors behave once they reach your site (traffic sources, page views, time on site, conversions), while Google Webmaster Tools reports how Google sees your site (crawl errors, index status, search queries, sitemaps, and robots.txt issues).

