[devdocsgjs/main: 586/1867] Update documentation

commit de6e42fa09239ff3c90998c807b619987ed4407e
Author: Jasper van Merle <jaspervmerle@gmail.com>
Date:   Sat Mar 9 03:13:47 2019 +0100

    Update documentation

 docs/Scraper-Reference.md | 39 +++++++++++++++++++++++++++++++++++++++
 docs/adding-docs.md       |  1 +
 2 files changed, 40 insertions(+)
---
diff --git a/docs/Scraper-Reference.md b/docs/Scraper-Reference.md
index d9fe8b4a..60d377d8 100644
--- a/docs/Scraper-Reference.md
+++ b/docs/Scraper-Reference.md
@@ -184,3 +184,42 @@ More information about how filters work is available on the [Filter Reference](.
     Overrides the `:title` option for the root page only.
 
   _Note: this filter is disabled by default._
+
+## Keeping scrapers up-to-date
+
+To keep scrapers up-to-date, the `get_latest_version(options, &block)` method should be overridden by all scrapers that define the `self.release` attribute. This method should return the latest version of the documentation being scraped. Its result is periodically reported in a "Documentation versions report" issue, which helps maintainers keep track of outdated documentation.
+
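+A minimal sketch of such an override; the scraper name, URL, and CSS selector are hypothetical, and the version is assumed to be reported via `&block`, matching the style of the helper methods described below:
+
+```ruby
+module Docs
+  class MyDoc < UrlScraper
+    # ... regular scraper attributes (self.release, self.base_url, etc.) ...
+
+    def get_latest_version(options, &block)
+      fetch_doc('https://example.com/docs/', options) do |doc|
+        # Pull the version number out of the fetched page and hand it to the block.
+        block.call doc.at_css('.version').content.strip
+      end
+    end
+  end
+end
+```
+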
+To make life easier, there are a few utility methods that you can use in `get_latest_version`:
+* `fetch(url, options, &block)`
+
+  Makes a GET request to the URL and calls `&block` with the response body.
+
+  Example: [lib/docs/scrapers/bash.rb](../lib/docs/scrapers/bash.rb)
+* `fetch_doc(url, options, &block)`
+
+  Makes a GET request to the URL and calls `&block` with the HTML body converted to a Nokogiri document.
+
+  Example: [lib/docs/scrapers/git.rb](../lib/docs/scrapers/git.rb)
+* `fetch_json(url, options, &block)`
+
+  Makes a GET request to the URL and calls `&block` with the parsed JSON body. A usage sketch follows this list.
+* `get_npm_version(package, options, &block)`
+
+  Calls `&block` with the latest version of the given npm package.
+
+  Example: [lib/docs/scrapers/bower.rb](../lib/docs/scrapers/bower.rb)
+* `get_latest_github_release(owner, repo, options, &block)`
+
+  Calls `&block` with the latest GitHub release of the given repository ([format](https://developer.github.com/v3/repos/releases/#get-the-latest-release)).
+
+  Example: [lib/docs/scrapers/jsdoc.rb](../lib/docs/scrapers/jsdoc.rb)
+* `get_github_tags(owner, repo, options, &block)`
+
+  Calls `&block` with the list of tags on the given repository ([format](https://developer.github.com/v3/repos/#list-tags)). A usage sketch follows this list.
+
+  Example: [lib/docs/scrapers/liquid.rb](../lib/docs/scrapers/liquid.rb)
+* `get_github_file_contents(owner, repo, path, options, &block)`
+
+  Calls `&block` with the contents of the requested file in the default branch of the given repository.
+
+  Example: [lib/docs/scrapers/minitest.rb](../lib/docs/scrapers/minitest.rb)
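+
+As `fetch_json` has no example linked above, here is a hedged sketch of how it could drive `get_latest_version`; the endpoint and response shape are hypothetical:
+
+```ruby
+def get_latest_version(options, &block)
+  fetch_json('https://example.com/api/versions.json', options) do |json|
+    # Assumes the endpoint responds with something like {"latest": "1.2.3"}.
+    block.call json['latest']
+  end
+end
+```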
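+
+Similarly, a hedged sketch of `get_github_tags` usage; the repository is hypothetical, and the first tag is assumed to be the newest and to be named like "v1.2.3":
+
+```ruby
+def get_latest_version(options, &block)
+  get_github_tags('example-owner', 'example-repo', options) do |tags|
+    # Strip the leading "v" from the tag name to get a bare version number.
+    block.call tags[0]['name'].sub(/^v/, '')
+  end
+end
+```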
diff --git a/docs/adding-docs.md b/docs/adding-docs.md
index 03c6b87a..baafc59a 100644
--- a/docs/adding-docs.md
+++ b/docs/adding-docs.md
@@ -16,6 +16,7 @@ Adding a documentation may look like a daunting task but once you get the hang o
 9. To customize the pages' styling, create an SCSS file in the `assets/stylesheets/pages/` directory and import it in both `application.css.scss` AND `application-dark.css.scss`. Both the file and CSS class should be named `_[type]` where `[type]` is equal to the scraper's `type` attribute (documentations with the same type share the same custom CSS and JS). _(Note: feel free to submit a pull request without custom CSS/JS)_
 10. To add syntax highlighting or execute custom JavaScript on the pages, create a file in the `assets/javascripts/views/pages/` directory (take a look at the other files to see how it works).
 11. Add the documentation's icon in the `public/icons/docs/[my_doc]/` directory, in both 16x16 and 32x32-pixel formats. It'll be added to the icon sprite after your pull request is merged.
+12. Ensure `thor updates:check [my_doc]` shows the correct latest version.
 
 If the documentation includes more than a few hundred pages and is available for download, try to scrape it locally (e.g. using `FileScraper`). It'll make the development process much faster and avoid putting too much load on the source site. (It's not a problem if your scraper is coupled to your local setup; just explain how it works in your pull request.)
 

