What's new in this version:
- Adds options to context menu of 'Links by status' view. When multiple links are selected and the context menu is called, there are now options to 'recheck selected link urls' or 'recheck parent pages of selected links'. The latter is a v9 feature and is already in a number of single-selection context menus. It's a more comprehensive check and is useful if the link has been 'fixed' by being removed or by its target url being changed
- Fixes some issues with renaming folders in websites folder list, dragging websites from one folder to another and dragging and dropping folders to reorder
- when parsing .css files for background images urls, now properly ignores anything /* commented out like this */
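The idea can be sketched in a few lines (an illustrative Python sketch, not Scrutiny's actual implementation, which is a native app; the function name and regexes here are ours):

```python
import re

def background_image_urls(css_text):
    """Extract url(...) values from CSS, ignoring commented-out declarations."""
    # Strip /* ... */ comments first so their contents are never parsed
    without_comments = re.sub(r"/\*.*?\*/", "", css_text, flags=re.DOTALL)
    # Then pull out url(...) arguments, tolerating optional quotes
    return re.findall(r"url\(\s*['\"]?([^'\")\s]+)['\"]?\s*\)", without_comments)
```

The non-greedy `.*?` matters here: a greedy match would swallow everything between the first `/*` and the last `*/`.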
- patches bug which could have caused the odd link url to be missed, or a spurious link url to be reported, if certain unlikely code appears in the page
- reduces some false positives by retrying urls once using GET if they fail the first time with certain errors under the more efficient HEAD method (the default for external links)
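The retry decision can be illustrated like this (a simplified Python sketch; the set of retryable statuses is an assumption for illustration, not Scrutiny's actual list):

```python
# Statuses some servers return incorrectly in response to HEAD requests
# (illustrative set, not Scrutiny's actual list)
SUSPECT_HEAD_STATUSES = {403, 404, 405, 500, 501, 503}

def next_step(method, status):
    """Decide what to do after a response: trust it, or retry once with GET.

    A success is always trusted; an error from a HEAD request may be a
    false positive caused by the method itself, so it earns one GET retry.
    """
    if 200 <= status < 400:
        return "ok"
    if method == "HEAD" and status in SUSPECT_HEAD_STATUSES:
        return "retry-with-GET"
    return "bad-link"
```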
- Corrects default text / background colours in 'robotize' window and sitemap visualisation. In dark mode the default values weren't playing nicely
- Important fix: fixes bug which was causing urls to be reported bad where they were found as the src of certain tags (iFrame, Embed, Script) and were not quoted
- Fixes some unexpected urls appearing in Link views when the search box is used
- Fixes possible hang at completion of scan if archive feature is switched on
- Some improvements to the tasks (Results selection) table
- Fixes summary report having blank links pie chart
- Fixes summary report containing some incorrect SEO statistics (pages with duplicate titles / descriptions counted twice)
- Fixes fatal error if option to check linked files is switched on and a css file isn't served with UTF-8 encoding
- Fixes bug causing the crawl to not remain within the 'directory' it starts within. (since 8.3.3)
- Fixes problem of redirects being duplicated after autosave (or manual save) and reloading the Scrutiny data. ie status showing as "200 no error < 301 moved permanently < 301 moved permanently" rather than the correct "200 no error < 301 moved permanently"
- Adds context menu to table within link inspector. Contains Visit, Highlight, Locate (as per the buttons below, which work if you first select a page within the table)
- Engine now correctly ignores 'data-' elements within link tags. This was leading to some spurious results
- Further improvements to 'soft 404' functionality. If the target of a link returns plain text rather than formatted html, Integrity now handles this. If the target page is formatted html and has a title, the title is also now searched for the list of soft 404 terms.
- Improvements to site search. Adds case sensitivity option.
- Further small fix for a potential problem to pattern matching (as used in site search, blacklisting soft 404 etc.)
- Fixes problem of 'soft 404' search returning 'near matches'. It now searches literally for the string(s) you enter
- Ditto for site search, which may have also returned 'near matches' when using recent versions of the system. It also now performs 'exact match' searches
- Adds disc space check before autosaving data (which can be a large amount of data)
- Fixes a bug causing the crawl to stall under obscure circumstances (starting the scan at a deep url, where the deep url contains an asterisk character)
- Adds built-in error/debug console which can help us give support
- The canonical url (if it points to a different page than the one it appears on) has always been collected and shown in the SEO table; it is now also shown as a link instance in the links results tables.
- Testing and reporting image urls within style sheets was listed in release notes for 8.2 but not fully implemented. Now fully working
- On systems 10.12+, new windows will open as tabs in a single Scrutiny window. It's possible to drag these out if you want a tab to be a separate window, or use 'Merge all windows' if you want separate windows to become tabs in a single window
Improvements to saving sitemap xml:
- better error handling and reporting
- when large sitemap is broken into multiple files, these are saved into a new folder at the location that the user chooses
- option added to prevent splitting of large XML file (there's no switch in the interface; it must be set using the Terminal)
- Fixes a potential crash when exporting full report (and possibly the links flat view) under certain circumstances
- Fixes problem causing certain save/export and alert dialogs to not show up
- Fixes bug causing the results selection > Insecure content to not display correct information sometimes after saving data (or if autosave is on) and re-loading the data
- Changes some defaults for SEO (these are editable by the user in Preferences > SEO, but these values are the default for new users). In line with current thinking, a long title is one that's over 60 characters, and a long description is one that's over 200 characters
- Fixes problem, data wasn't always being cleared properly from the 'insecure content' list (if any existed) when user switched between saved data from different sites
- Search box for link results is now a literal full match
- Subtle improvement to html parsing relating to comments
- Better handling of SSI where the include happens within an html tag
- Changes the method of saving the data (during autosave or when manually saving all scrutiny data for a site). Faster and takes less disk space. Any data that successfully saved and loaded previously will still do so
- Some engine improvements re extracting canonical url
- Small fix that can prevent a loop in unlikely circumstances with certain options switched on - a 404 page containing a meta-http refresh.
- Some updates to the French localization
- Improves iFrame support
- Fixes problem with img alt text being truncated if it contains a single quote character
- Fixes problem causing 'http links found within https site' dialog to be shown more than once at the end of the scan (and autosave performed more than once too, although that wouldn't have been visible)
- Important fix for everyone. If a sitemap is provided publicly on the website *in xml format* then this could have prevented full crawling of the site (due to deliberate rules about checking but not following urls when the user wants to check urls within an xml sitemap)
- Fixes bug that may have caused crash with certain urls
- Further work around the improvement to the meta http-equiv refresh handling
- Mojave dark-mode ready
- If crawl is started at a https:// page and a canonical of a secure page is insecure (http) then this is included in the report of insecure / mixed content pages. Previously this situation could be identified in the links data but wasn't included in the 'insecure/ mixed content' alert at the end of the scan.
- Fixes a bug which would have caused Scrutiny to stall at the first url (reporting that as a 200 but going no further) under an unlikely set of circumstances
- Different handling of a common issue: linkedIn urls returning a 999 code (even though the link may work in a browser). This is not a Scrutiny issue but common to all webcrawlers / testers. LI seems to detect the rapid requests and/or non-browser querystring and returns a non-standard 999 code. Scrutiny used to present this as a server error and count it as a bad link. Now it labels it as a warning, and does not count it as a bad link. This is because it is not necessarily a bad link, it just hasn't been possible to test it properly.
- Fixes issue with meta http-refresh not being observed if the page contains content with links. (The content was being parsed for links, in favour of the redirection being observed.)
- Fixes bug causing no data to show when Filter button on SEO table is set to 'Duplicate descriptions'
- (NB this version of Scrutiny is built against the 10.14 APIs which are still officially beta. This version should run fine on all supported systems. NB 8.1.4 was the last version built with an SDK version < 10.14)
- (This version is officially beta because it's built against the 10.14 APIs, which are still beta. It should run fine on all supported systems)
- Better handling of a recurring 'Refresh' header field which could have appeared to leave the scan hanging when almost 100% finished
- Fixes a possible crash after exporting links to csv
- Fixes problem scanning a site locally when the directory path contains a space or certain other characters
- Adds override for the built-in behaviour which excludes pages from the sitemap if they are marked robots noindex or have a canonical pointing to another page. These options are in Preferences > Sitemap, they should be on by default and should only be switched off in rare cases where it really is necessary, such as using the sitemap for a purpose other than submission to search engines (where you do want all internal pages in the file)
- Updates links within the app and dmg (support, EULA etc) to new https equivalents
- Fixes problem copying page url in 'by page' view
- Some fixes to 'recheck' functionality from context menus
- Now correctly handles quotes and return characters within link text when exporting links as csv
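Getting this right means doubling embedded quotes and wrapping any field that contains quotes or newlines - the standard CSV quoting rules. A minimal Python illustration of those rules (Scrutiny itself is a native app; the names here are ours):

```python
import csv
import io

def links_to_csv(rows):
    """Write (url, link text) rows to CSV text.

    The csv module applies the standard escaping: fields containing
    quotes or newlines are wrapped in quotes, and embedded quotes doubled.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["url", "link text"])
    writer.writerows(rows)
    return buf.getvalue()
```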
- Corrects the flat / hierarchical html sitemap export option. (was working the opposite way around to expected)
- Fix for http links being included in sitemap when 'consider http pages as external' is checked in Preferences. If a link whose target is internal redirects to an external url, the link is now considered by Scrutiny to be external rather than internal. Previously, being considered internal was causing such a link to be included in the sitemap despite 'consider http pages as external' being checked
- If a link whose target is internal redirects to an external link, this link is now correctly included in the 'Insecure content' report as redirecting to an insecure (http://) page
- Fix to Links / By Link table, which was not remembering its column information
- Fixes pages being excluded from the sitemap (reason given, canonical points elsewhere), under certain circumstances and with the 'ignore trailing slash' button unchecked (which is checked by default, should only be unchecked if really necessary).
- "Use Unicode normalization form KC" is now off by default; it's proved less helpful to have it on than off.
- Some fixes to 'mark as fixed' functionality (and re-saving the autosaved data after user makes such changes)
- Fixes problem with sitemap rules
- Fixes problem with 'update change frequencies' button
- Tidies up the sitemap transfer; a 'success' message is now shown if the sitemap is transferred by ftp after saving locally, as it wasn't previously clear that this had been performed
- Fixes manual sitemap ftp - after generating the sitemap and showing the ftp dialog, the transfer wasn't being performed
- Reinstates the import v5/v6 website configs, but as an option rather than being performed automatically on first startup as v7 did. (Find it under File > Websites from earlier Scrutiny version)
- Fixes broken site sorting. Your list of sites (whether viewing by folder or a single list) is now correctly sorted by name by default, and sortable by name, url or last checked
- Repair to 'ignore and in Spellcheck preferences
- Some fixes to exporting of links as csv or html, fixes possible crash when exporting
- Fixes problem with exporting Sitemap table as csv
- Adds columns to SEO > Meta data table for
- Adds option to export SEO summary headlines as csv. (Helps create custom reports using Google Data Studio or other reporting tool )
- The summary is also included as a csv in the 'full report'
- Fixes weekday selector, which wasn't appearing correctly when selecting Schedule > Weekly
- Fixes Preferences > Links > Do not report redirects which was apparently not working
- Further measures to reduce 'false positives' (an important v8 feature). In this case, 403 (forbidden) may be returned in some cases if the useragent string is Googlebot or not a browser. Where a 403 is received and the user has the useragent string set to Googlebot or Scrutiny, the url is retried once, with cookies, the GET method and the useragent string of a regular browser
- Doubles the alt text buffer, alt texts of more than 1,000 characters were regularly being seen
- Some fixes to the reporting (full / summary / csv / pdf) - possible crash when generating that manually or as a finish action, and SEO radar charts
- Fixes spelling dialog so that it properly shows grammar details
- Fixes situation where there are no spelling results to report but are some grammar. Scrutiny was claiming from the tasks screen that there were no spelling or grammar problems to report and leaving the tables empty
- Fixes problem sometimes seen in meta data. Keywords and description could show spurious values depending on the order that the meta data appeared
- Fixes recent issue with code signing. For a short time, builds would not have run without lowering of security settings
- Fixes bug that prevented full scanning if port number used in the starting url
- If a site config is deleted with a schedule still set, the schedule is now correctly removed before the site is removed.
- Fixes problem in isFinished causing multiple instances of the archive dialog
- Fixes problem with archive causing a hang (archive and browsable settings had to be on)
- Fixes percent-encoding bug which caused crash under very unusual circumstances (an unusual character in the link href combined with unusual page text encoding)
- Fixes bug in defaults sync which might have caused some odd effects when creating new config / adding / deleting rules etc.
- Efficiency improvements, reducing pause at end of scan, noticeable with large sites (counting pages with possible duplicates)
- Improvement to IDN functionality, specifically if page contains percent-encoding within domain part of url, wasn't being handled properly.
- Sorts a problem with redirects, where a url is redirected to a url already in the list. Sometimes this could randomly result in an odd status being reported, (302 < 302 rather than the correct 200 < 302)
- fixes bug causing urls from a srcset attribute to not be reported if not preceded by a regular src attribute
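A srcset value is a comma-separated list of 'url descriptor' candidates, so each url is the first whitespace-separated token of each part. A minimal sketch of that parsing (Python, for illustration only; real srcset parsing has edge cases such as commas inside urls):

```python
def urls_from_srcset(srcset_value):
    """Parse a srcset attribute value into its candidate urls."""
    return [part.strip().split()[0]
            for part in srcset_value.split(",")
            if part.strip()]
```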
- Restores ability to scan website locally
- Adds two new columns to SEO table - title length and description length - they're optional, you can switch them on using the column selector above the table, and they're sortable numerically
- Some internal updates relating to the rules changes in the last point release
- Fixes bug in 'highlighting', if the link occurred more than once on the page, only the first would be highlighted properly
- Adds ability to scan Wix site. No visible option for user, Wix site is autodetected
- We don't endorse or encourage the use of Wix, their dependency on ajax breaks accessibility standards and makes them difficult for machines to crawl (ie SEO tools and search engine bots) and impossible for humans to view without the necessary technologies available and enabled in the browser.
- Fixes bug causing potential crash if pages were excluded from sitemap for both possible reasons and the user pressed the 'more info' button
- Fixes minor bug in column selector above certain tables, for French users
Some improvements to 'rules' dialog:
- Rules dialog opens as a sheet attached to the main window, rather than randomly positioned on the screen
- Adds 'urls that contain...' and 'urls that don't contain....' option giving much more flexibility
- (removes 'only follow'. The wording of this became confusing in certain cases (eg if you have more than one of those rules) and it's no longer required because it's the same as 'do not follow urls that don't contain' )
- Fixes bug preventing keywords from showing in SEO meta keywords column
- Some small improvements aimed at preventing occasional hang or crash when scan finishes
- Important update for French users, when using French localisation, blacklist rules ('Ignore links containing' etc) would have appeared not to save
- Some fixes relating to 're-check' from context menu items - fixes possible crash or apparent inaction after using that context menu item
- (When re-checking from the 'by page' or 'by status' views, no feedback is given to the user until the re-checking is complete - this fact is noted)
- Fixes problem with visualisation (.dot) export, some connections weren't being included under some circumstances
- When exporting .dot file, the 'cleaned up sitemap' is no longer marked 'recommended' and the full file will be the default. This ties in with imminent changes to a new version of Siteviz (which is the visualiser built into Scrutiny) which does the 'cleaning up' itself (ie removes links that go 'upstream'). It's now best that all links are included in the .dot file because siteviz (and in the near future, the visualiser within Scrutiny) will display the number of internal backlinks and colour nodes according to how many inbound links there are
- improvements re scanning a site locally (improved handling of relative links: '/example.html' is relative to the site root, so Scrutiny now constructs that url relative to the directory of your starting file:// url, which is most likely to be correct - previously it was constructed relative to the drive root)
- enables sorting in the new h1's and h2's columns of SEO table
- built with greater level of optimization
- Change log not available for this version
- Fixes problem with discovering all frame urls within a frameset
- Adds detailed diagnostic window - shows details of the http request and response, data received, the values of important settings etc. for the initial URL. This window will be offered via a dialogue if the engine didn't crawl any or many links. It's also available if appropriate via a triangular button below the number of links found in the Links results and at any time via the Tools menu
- Some additions to the French localization
- Important fix re reporting. Particularly re the 'scan with actions' and 'perform actions' options where 'generate report' is selected in 'finish actions'
- French localization completed
- Fixes bug preventing SEO information (title, description) from being reported if starting with a text list of urls
- Adds 'redirects to here' column in SEO table. A count of the number of other urls that redirect (via 3xx or meta http refresh) to this page. The column is easily switched on and sorted to find the pages with the most. This is now an important SEO factor; Google considers a page to be a 'soft 404' if many pages redirect to it
- Adds option for spellchecker to only search contents of tags
- Fixes problem causing bad link count to be a little higher than the actual number of bad links. (Caused by certain external urls responding with an error but returning OK when automatically retried; the bad link had already been counted and wasn't reset)
- Important release for users of High Sierra
- Fixes problem that could cause incorrect link text to be reported
- Where appropriate, Integrity uses the HEAD method for efficiency. However, some servers incorrectly return a 404 or 5xx in response to a HEAD request. Such urls are now automatically retried using GET
- Adds case sensitivity when checking file:// urls. There's a new option on the 'Advanced' tab of settings and options; case sensitivity is on by default.
- Fixes incorrect handling of base href = single forward slash, now correctly interprets as "relative to the public root"
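This matches standard relative-url resolution: the base href is itself resolved against the page url, and '/' resolves to the site's public root. Illustrated with Python's urljoin (the function name here is ours):

```python
from urllib.parse import urljoin

def resolve_link(page_url, base_href, link):
    """Resolve a link against <base href>, itself resolved against the page url.

    A base href of a single forward slash therefore means
    'relative to the public root'.
    """
    base = urljoin(page_url, base_href)
    return urljoin(base, link)
```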
- Fixes crash or hang under particular unlikely circumstances
- Fixes bug which prevented some srcset (2x etc) images from being found
- Increases stability and efficiency under certain circumstances
- Fixes minor problem with the 'delay' functionality (for throttling requests). The bug caused this setting to sometimes not be observed
- Adds options to ftp dialog (sitemap export) to use TLS, and adds field for port number (defaults to the usual 21)
- Fixes bug causing ftp dialog details to not be saved
- Some other small improvements such as validation of the directory field
- fixes issue with links not being found after self-closing script tag in body
- fixes issue with
- Fixes problem with 'Learn all selected' button on Spelling results window
- Small change that helps stagger multiple simultaneous requests
- Adds 'meta refresh' column to Links tables 'by link' and 'flat view'. The column is sortable, so makes it easy to find all of the meta refresh redirections on a site.
- If a link is redirected by meta refresh, the Preference 'don't report redirects at all, only the final status' is now correctly observed
- Fixes bug causing urls to be duplicated in the sitemap under certain redirection situations
- Adds French localisation to all of the context help (this is only a first step, all buttons / labels are being translated.)
- Fixes bug causing apparent random crash
- Improvements to thumbnail creation - with some sites the thumbnail could appear blank or incomplete
- Adds option (on by default) for the spell-checker & grammar checker to ignore content marked up as (html5 tags)
- Adds option (also on by default) for the spell-checker & grammar checker to ignore content marked up as and (html5 tags)
- Adds option to ignore or include image alt text within spell check (also on by default)
- The above options are in Preferences > Spelling
- Fixes bug causing Scrutiny to fall over in a peculiar set of circumstances (if canonical url has fewer than 7 characters and the 'treat http and https versions of url to be the same' option switched on)
- Some safeguards added - if the starting url has whitespace or return characters pasted or typed in, these are trimmed before attempting to start the crawl
- whatsapp: links are now ignored (along with mailto: tel: etc) rather than incorrectly reported as bad links
- Allows generation of a sorted list of images by file size, and which pages they appear on (adds 'target size' column (optional) to the Links 'by link' and 'flat' views)
- Fixes a couple of issues with keyword analysis and adds some information to the Help files
- Fixes a problem with the preview of images without alt text (double-click a table row to open a preview of the image)
- Fixes problem of keyword count in headings not being displayed properly since the change from a single headings column to separate columns for h1, h2, h3, h4
- Some improvements around site search (if a list of search terms is pasted in from a windows-formatted text file, the different carriage return characters could cause some issues. Patched now)
- Adds 'Manage Autosaved Data' (access through Tools menu or cmd-5). Window shows all autosaved data, allows sorting, and allows deletion, either move to trash or immediate delete
- For new users, the Autosave feature is on by default
- Adds disc space check before Autosaving data, and an alert with advice if disc space is low
Improves insecure content reporting:
- Adds a new table of results - this shows all issues - secure pages which contain links to insecure ones, and pages with mixed content. It's expandable to show the details in all cases. It can be exported to CSV or HTML
- These results (if there are any) are available from the Results Selection screen, and are saved with autosaved / manually saved data
- Links > Filter > http: links and SEO > Pages with mixed content will work as before
- Fixes problem with exporting spelling results as csv or html
- Adds option for orphan check to scan a local directory and compare with the website scan. (as before, this will obviously only work for static sites)
- Adds 'redirect chain' report to SEO table
- Adds 'Redirect count' as a sortable column to the Links 'by link' view
- Adds 3D theme to sitemap visualisation
- Adds 'copy urls' to the context menu when multiple items are selected in all links tables (cmd-C also enabled where multiple items are selected). A return-separated list of the selected urls is copied to the clipboard.
Some changes & fixes to the existing orphan check functionality:
- orphan data is now included in the autosave for the site
- Adds 'check for orphaned images / pdfs'
- ftp directory blacklist now accepts file extensions for ignoring
Improvements to 'headings' within SEO table:
- collects and can display heading levels h1 -> h4
- Adds columns to SEO table to show h1, h2, h3, h4 separately (as before, each column shows a comma-separated list if there are more than one heading at that level)
- If you know that you won't need all those heading levels, there is a hidden preference to set the maximum level that you want - this can save resources Terminal: defaults write com.peacockmedia.Scrutiny-7 headingLevelMax 3
Other small fixes:
- Fixes double save dialog before export full report as pdf
- Fixes a problem that sometimes prevented 'Continue scan' from continuing properly (it would appear to check a few links and then stall)
- Fixes Sierra-specific problem with some alert boxes hanging.
- Fixes problems of incomplete information after a manual save and re-load of data
- Better handling of an unusual situation where 'content-type' isn't returned in a response header. In that case, Scrutiny now assumes html and attempts to parse it as such
- Improves built-in help files. Under the help menu you'll now find a link to the support form, the browsable version of the manual and a pdf (printable or savable) version
- animated dock icon has a 'sweep' which indicates progress
- fix to archiving functionality / browsable format for asp pages
- adds active licence key number to About box
- Small but important fix to the site search
- Fixes bad links not being saved to csv as part of the full report
- Adds standard 'Help book' manual. Find this under the Help menu. This will be under continuous review and improvement.
A number of fixes around the sitemap functionality, exclusion of pages from the sitemap and canonical URLs:
- Adds a button for viewing pages which have deliberately been excluded from the sitemap. It opens a table showing the URL, canonical URL and the reason that the page has been excluded. The table has context menu for copy URL and visit.
- Where a page has a canonical URL pointing to itself, this page may have been incorrectly excluded from the sitemap in the past if the canonical URL's capitalization is different from the page URL. This match is now checked in a case-insensitive way.
Further fixes to 'check links within PDFs' functionality:
- Fixes problem with link text reporting (within PDFs)
- Slightly increases link target area to increase likelihood of capturing the link text
Other small fixes:
- Fixes a problem with the column selection button on the Links Flat view
- Fixes context help within Preferences window
- Fix to 'check links within pdf documents' setting
- Fix to the 'page urls have no file extension' checkbox. If it was set as a result of the user answering 'page' to the question in the dialog box that pops up when you start the scan, and you then quit without changing any other settings, the setting may not have been checked when Scrutiny next opened
- Fixes problem with Preferences > Sitemap > Template, editing this in earlier versions of 7 caused odd behaviour.
- Fixes problem with the engine not always recognising an end comment where it looks like ---------------->
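Per the HTML rules, a comment ends at the first '-->', however many dashes precede the '>'. A minimal sketch of that scan (Python, illustrative only; the real HTML spec has further edge cases):

```python
def comment_end(html, start):
    """Return the index just past a comment that opens at `start` (at '<!--').

    The comment ends at the first '-->'; a run of dashes like
    '---------------->' still terminates it.
    """
    end = html.find("-->", start + 4)
    return end + 3 if end != -1 else len(html)
```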
- Adds a useful context menu to the 'live view' table, containing 'copy URL' and 'visit URL'
- fixes problem with meta description being duplicated in SEO table if twitter meta description is present
- adds 'pdf documents' to the filter drop-down for link results
- adds update check
- adds 'search terms found' column to site search results table. (This makes things easier if you've searched for multiple search terms)
- tidies up some of the behaviour when adding, editing or deleting sites (window title, breadcrumb widget etc)
- tidies up display of site search results (sitemap controls now hidden)
- tidies up niggles with sitemap rules window
- First full launch of Scrutiny 7 - out of beta
- 'Document-based' - have as many windows open as you like to run concurrent scans, view data, configure sites, all at once
- New UI, includes breadcrumb widget for good indication of where you are, and switching to other screens. Also includes more logical flow - choose to run a scan, then choose how to view your results (Links, SEO, Sitemap, whatever).
- Organise your sites into folders if you choose.
- Autosave now automagically saves data for every scan, giving you easy access to results for any site you've scanned.
- Better protection when disc space is low, scan should stop before catastrophe happens. Each separate scan that's running will give an option to pause or continue regardless, when space on system disc ('/') reaches 750Mb
- Better reporting - summary report looks nicer, full report consists of the summary report plus all the data as csvs
- Fixes 'always use this directory' (when saving archive at the end of scan) - previously this was not remembered if using the 'convert to browsable format' option
- Adds 'ignore session id within querystrings' - allows you to not ignore the whole querystring, but ignore the session id within it. Useful for forums where querystring is important, but session id's cause crawl not to complete. This is a 'per site' setting and (in version 6) is within the Advanced window.
- Fixes obscure problem which occurred when canonical is given as just "http://" or "https://"
- Improvements to archiving in browsable format: handles querystrings and php sites (obviously php pages will then be html snapshots, not active php)
- Prevents a crash that could happen at the end of the scan (when progress bar finishes, before results are displayed)
- Much improved context help system. Discreet 'i' buttons beside many settings pop up some useful advice about the setting, with a button for the support form