ZScaler

Problem:

All organisations that use ZScaler have .pac files that are accessible via the web.

... let's see how many we can pull (with little effort)

z...Scale the problem

Downloading the Alexa Top 1 Million list should give me some hits... except it's no longer available for free :[
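
If you can't find an archived copy, Cisco's free Umbrella "top 1 million" list uses the same rank,domain CSV format, so it drops straight into the steps below (the S3 URL was correct at the time of writing and may move):

# grab and unpack the Umbrella list - produces top-1m.csv, same layout as the old Alexa file
curl -sO https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m.csv.zip
unzip -o top-1m.csv.zip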

make it useful
Strip out the domains and also create a ton of .pac filenames that look similar to those seen in Google's cache.

Strip the domains out

cat top-1m.csv | cut -f2 -d , > domains.txt

Remove the domain extension
cat domains.txt | cut -f1 -d . > names_pre_append.txt

Add .pac (save as a separate file ... we need both for Burp)

sed 's/$/.pac/' names_pre_append.txt > million_Pacs.txt
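
For what it's worth, the three steps above collapse into a single awk pass over the CSV (same output files, same ordering, which matters for pitchfork):

awk -F, '{
  print $2 > "domains.txt"                   # full domain
  sub(/\..*$/, "", $2)                       # drop everything after the first dot
  print $2         > "names_pre_append.txt"  # bare name
  print $2 ".pac"  > "million_Pacs.txt"      # name with .pac appended
}' top-1m.csv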

Burp Intruder.

There are two insertion points we care about: the domain name in the path (pitchfork payload set 1) and the domain.pac filename (pitchfork payload set 2).

Don't forget to turn off payload encoding.

hit it!
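
If you'd rather not push a million requests through Intruder, the same pitchfork pairing is a short shell loop. This is only a sketch: PAC_HOST and the /name/name.pac path layout are placeholders, so copy the real host and URL structure from the request you captured in Burp.

PAC_HOST="pac.example.invalid"   # placeholder - use the PAC host from your captured request
paste -d' ' names_pre_append.txt million_Pacs.txt |
while read -r name pacfile; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://${PAC_HOST}/${name}/${pacfile}")
  [ "$code" = "200" ] && echo "HIT ${name}/${pacfile}"
done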

Impact?

I'm not really sure, and it will differ on a case-by-case basis, but it begs the question: how much detail have you put in your .pac file that can be used against your organisation? The trust lists could be used for social engineering, and the white-lists could be used to map unfiltered ingress/egress ... it depends what your motive is: pen-tester, bad guy, gov, foreign gov? Either way, it's a very passive way of partially enumerating targets, if they are using ZScaler PACs.
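
As a rough illustration of what leaks, a couple of grep passes over whatever PAC files you do pull (assuming you've saved the hits locally as *.pac) will surface the interesting bits - isInNet, dnsDomainIs and shExpMatch are the standard PAC helper functions used for bypass and routing decisions:

# quoted hostnames / wildcards the organisation special-cases
grep -hoE '"[A-Za-z0-9*.-]+\.[A-Za-z]{2,}"' *.pac | sort -u

# bypass and routing rules - internal ranges, trusted domains, direct-out exceptions
grep -hiE 'isInNet|dnsDomainIs|shExpMatch' *.pac | sort -u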

Resolve

White-list access to specific folders/files based on the relevant organisation's ASN/IP ranges, or maybe present different .pac files depending on which network the request came from?

Note: although the Top 1 Million got me results, I wouldn't recommend it for much more than demonstrating the Burp attack and validating that good lists get you good data. Get as big a list as you can.
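
For example, something along these lines, where the inputs are whatever name sources you manage to collect (hypothetical filenames, one bare company/domain name per line):

cat alexa_names.txt umbrella_names.txt scraped_names.txt | sort -u > names_pre_append.txt
sed 's/$/.pac/' names_pre_append.txt > million_Pacs.txt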


Response from ZScaler

I emailed Deepen Desai; he was welcoming and level-headed about the information I sent him, and ZScaler's remediation was rapid.

Here it is:

here are some updates:

  • We were surprised to see 3 results on Google as search engine indexing has been disabled as part of the installation. We checked the 'robots.txt' file on all PAC servers and that seems to be set up fine.

  • Enforcing ACLs is not possible in most cases due to the fact that most of the machines making use of the PAC files exist outside a customer’s network and their locations cannot be predicted ahead of time.

Here are the things we have agreed to implement:

  1. Auto-generate PAC file names in the UI, making them difficult to predict. Currently, customers are allowed to choose the filename without any restrictions.

  2. We are doing an audit of all the existing customer PAC file names to ensure that they are not using weak file names like the ones you have found. Our customer support team will then work with the identified customers to fix this issue.


Can't get much fairer than that; the issue is now resolved.