What is the role of Robots.txt file in SEO?

The purpose of a robots.txt file is to tell search engine crawlers not to index the folders or files that you don't want to appear in search engines.
You need a robots.txt file only if your site includes content that you don't want search engines to index. If you want search engines to index everything on your site, you don't need a robots.txt file at all.

Robots are often used by search engines to categorize and archive websites. The robots.txt convention is also known as the "Robots Exclusion Standard" or the "Robots Exclusion Protocol".

A robots.txt file restricts access to your site by search engine robots that crawl the web. These bots are automated, and before they access pages of a site, they check to see if a robots.txt file exists that prevents them from accessing certain pages.
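
As a rough illustration of that check, the sketch below uses Python's standard urllib.robotparser module. The rule set and example URLs are assumptions made up for the example, not rules taken from any real site.

from urllib.robotparser import RobotFileParser

# Illustrative rules, standing in for what a crawler would download
# from a site's /robots.txt before visiting any of its pages.
sample_rules = """
User-agent: *
Disallow: /personal/
""".splitlines()

parser = RobotFileParser()
parser.parse(sample_rules)

# A well-behaved crawler consults the parsed rules before fetching each URL.
for url in ("http://www.domain.com/index.html",
            "http://www.domain.com/personal/mybankdetails.html"):
    if parser.can_fetch("*", url):
        print("allowed to crawl:", url)
    else:
        print("blocked by robots.txt:", url)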

For websites with multiple subdomains, each subdomain must have its own robots.txt file. For example, if you have a robots.txt file for domain.com but none for subdomain.domain.com, the rules that apply to domain.com will not apply to subdomain.domain.com.
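To illustrate, each host serves its own file from its own root:
http://domain.com/robots.txt (these rules apply only to pages on domain.com)
http://subdomain.domain.com/robots.txt (these rules apply only to pages on subdomain.domain.com)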

The syntax below allows all files on the website:
User-agent: *
Allow: /

This restricts all files:
User-agent: *
Disallow: /

To selectively restrict folders:
User-agent: *
Disallow: /tmp/
Disallow: /personal/

To selectively restrict a specific file:
User-agent: *
Disallow: /personal/mybankdetails.html

Some crawlers support a Sitemap directive, and multiple Sitemap entries are allowed in the same robots.txt file.
Example:
Sitemap: http://www.domain.com/sitemaps/sitemap.xml
Sitemap: http://www.domain.com/news/newsitemaps/newssitemap.xml