Thursday, January 7, 2021

Retrieving Google Place Data via REST Query to Google API

So my organisation needed accurate latitude and longitude coordinates for all of its facilities, and we had determined that we didn't have an accurate set of records for this.

I determined that we could query Google for this using a web request in the format:


https://maps.googleapis.com/maps/api/place/details/json?placeid=PutYourPlaceIDHere&key=PutYourAPIKeyHere
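
The useful part of the JSON response is the geometry block. Trimmed down to just the fields the script below reads (the coordinate values here are invented for illustration), it looks roughly like this:

{
  "result": {
    "geometry": {
      "location": {
        "lat": -33.8675,
        "lng": 151.207
      }
    }
  },
  "status": "OK"
}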

So, having a list of all of the Google Place IDs from the organisation's Google My Business setup, I wrote a little script to call the JSON endpoint and pull the latitude and longitude for each facility.
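
The script expects the input CSV to have two columns, Centre and PlaceId, with one row per facility. Something like this (the Place ID is just an illustrative example):

Centre,PlaceId
Main Office,ChIJN1t_tDeuEmsRUsoyG83frY4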

<#
# AUTHOR  : Sean Bradley
# CREATED : 08-01-2021
# UPDATED : 
# COMMENT : Uses Google Maps API Key to grab GMB Data from Web.
# Updates: 
# 1. 
#>
#Establish Logging
$RootPath = "C:\Scripts"
$Logfile = "$RootPath\GetGMBData.Log"
Start-Transcript -path $Logfile
#Establish variables
Write-Host "Setting some variables" -ForegroundColor Green
$InputFile = "$RootPath\GMBPlaceIDs.csv"
$OutputFile = "$RootPath\GMBData.csv"
$MapsKey = "PutYourAPIKeyHere"
$MapsURL = "https://maps.googleapis.com/maps/api/place/details/json?placeid="
Write-Host "Doing some preparatory file checks" -ForegroundColor Gray
$FileExists = Test-Path -Path $OutputFile -PathType Leaf
If ($FileExists) {
    Write-Host "Deleting last export" -ForegroundColor Gray
    Remove-Item $OutputFile -Force | Out-Null
}
# Get input data from the CSV file
$FileExists = Test-Path -Path $InputFile -PathType Leaf
if ($FileExists) {
    Write-Host "Loading $InputFile for processing."
    $tblData = Import-Csv $InputFile
}
else {
    Write-Host "$InputFile not found. Stopping script."
    exit
}
# Query Google for the required JSON Data
foreach ($row in $tblData) {
    Write-Host "Getting Google Data for $($row.'Centre') with Google Place ID $($row.'PlaceId')"
    $QueryURL = $MapsURL + $row.'PlaceId' + '&key=' + $MapsKey
    # Query the Places API, shape the result and append it to the CSV
    Invoke-RestMethod $QueryURL -Method Get |
        Select-Object @{Label = "Centre"; Expression = {$row.'Centre'}},
            @{Label = "PlaceID"; Expression = {$row.'PlaceId'}},
            @{Label = "Lat"; Expression = {$_.result.geometry.location.lat}},
            @{Label = "Lng"; Expression = {$_.result.geometry.location.lng}} |
        Export-Csv -Path $OutputFile -NoTypeInformation -Append
}
Stop-Transcript | out-null
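
The resulting GMBData.csv ends up with one row per facility, shaped like this (coordinates invented for illustration):

"Centre","PlaceID","Lat","Lng"
"Main Office","ChIJN1t_tDeuEmsRUsoyG83frY4","-33.8675","151.207"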

Monday, October 31, 2016

How to Stop Windows 10 Domain Computers reporting "Disable apps to help improve performance"

Create or modify a Group Policy Object that applies to the target computers.

Under Computer Configuration\Policies\Windows Settings\Scripts\Startup, create a PowerShell script entry named "DisableStartupAppTask.ps1".

In the script, have the single line of code:

Disable-ScheduledTask -TaskPath '\Microsoft\Windows\Application Experience\' -TaskName 'StartupAppTask'
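
To confirm the task is actually disabled once the policy has applied, a quick check on a target machine (not part of the GPO itself) is:

Get-ScheduledTask -TaskPath '\Microsoft\Windows\Application Experience\' -TaskName 'StartupAppTask' | Select-Object TaskName, State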



Wednesday, June 3, 2015

sFlow: Sampling rates

From http://blog.sflow.com/2009/06/sampling-rates.html:



A previous posting discussed the scalability and accuracy of packet sampling and the advantages of packet sampling for network-wide visibility.

Selecting a suitable packet sampling rate is an important part of configuring sFlow on a switch. The table below (from the linked post) gives suggested values that should work well for general traffic monitoring in most networks. However, if traffic levels are unusually high the sampling rate may be decreased (e.g. use 1 in 5,000 instead of 1 in 2,000 for 10Gb/s links).

Link speed   Suggested sampling rate
10 Mb/s      1 in 200
100 Mb/s     1 in 500
1 Gb/s       1 in 1,000
10 Gb/s      1 in 2,000
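
As a quick sanity check on what those rates mean for collector load (my arithmetic, not the original post's): a 10Gb/s link carrying 500,000 packets per second and sampled at 1 in 2,000 sends only about 250 sampled packets per second to the collector.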

Configure sFlow monitoring on all interfaces on the switch for full visibility. Packet sampling is implemented in hardware so all the interfaces can be monitored with very little overhead.

Finally, select a suitable counter polling interval so that link utilizations can be accurately tracked. Generally the polling interval should be set to export counters at least twice as often as the data will be reported (see the Nyquist-Shannon sampling theorem for an explanation). For example, to trend utilization with minute granularity, select a polling interval of between 20 and 30 seconds. Don't be concerned about setting relatively short polling intervals; counter polling with sFlow is very efficient, allowing more frequent polling with less overhead than is possible with SNMP.

Sunday, March 1, 2015

How to Convert a PFX (PKCS#12) SSL Certificate to Separate KEY and CRT Files


I've had to look this up a number of times, so I'm posting it here for posterity.

source: http://www.markbrilman.nl/2011/08/howto-convert-a-pfx-to-a-seperate-key-crt-file/

`openssl pkcs12 -in [yourfile.pfx] -nocerts -out [keyfile-encrypted.key]`

This command extracts the private key from the .pfx file. You'll be prompted for the import password of the .pfx file, which is the password you used to protect the keypair when you created it. If you can't remember it, you may as well throw the .pfx file away, because you won't be able to import it anywhere. OpenSSL then asks you, twice, for a new password; this one will protect your .key file.

Now let’s extract the certificate:

`openssl pkcs12 -in [yourfile.pfx] -clcerts -nokeys -out [certificate.crt]`

Enter the import password again and your certificate appears.

Now, as mentioned in the intro of the source article, some devices need an unencrypted .key file to import. I probably don't need to mention that you should be careful: if you store an unencrypted key somewhere unsafe, anyone who gets hold of it can impersonate, for instance, a website or a person in your company. So always be extra careful with private keys. Throw the unencrypted key file away when you're done with it and keep just the encrypted one.

The command:

`openssl rsa -in [keyfile-encrypted.key] -out [keyfile-decrypted.key]`

Notes:
- When you first extract the key, protect it with a new password (it can be the same one you used for the export), then create an unencrypted copy with the rsa command above.
- Use an unencrypted key file for NGINX; with an encrypted key it'll ask for the password every time it restarts.
- Check the top of the extracted .crt file for extra text above the -----BEGIN CERTIFICATE----- line and remove it if necessary.
- This certificate needs to be concatenated with the full chain of certificate authorities: `cat domain.crt CA_bundle.crt > final.crt`
- Test the cert with `openssl s_client -showcerts -connect www.domain.com:443` (and see the key/certificate match check below).
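
One extra check worth doing before deploying: confirm the private key actually matches the certificate. For RSA keys, if these two commands print the same hash, the pair belongs together (file names follow the examples above):

`openssl x509 -noout -modulus -in final.crt | openssl md5`

`openssl rsa -noout -modulus -in keyfile-decrypted.key | openssl md5`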

Addendum:

To convert a PFX file to separate PEM files:

`openssl pkcs12 -in [yourfile.pfx] -out [certificate.pem] -clcerts`

`openssl pkcs12 -in [yourfile.pfx] -out [cacerts.pem] -cacerts`

To convert a PFX file to a combined PEM file in one step AND remove encryption:

`openssl pkcs12 -in [yourfile.pfx] -out [decrypted.pem] -nodes`


Tuesday, November 25, 2014

Enforce Google Safe Search

So Google is no longer going to permit the nossl DNS trick that previously let organisations disable SSL for searches in order to enforce Safe Search.

Google Online Security Blog: An update to SafeSearch options for network administrators

The option they now permit is a DNS trick that points users to forcesafesearch.google.com, which is still SSL enabled but does not allow the user to disable Safe Search.

The only way to ensure this for all Google search engines is to create a DNS zone for each of Google's search domains... all 193 or so.

Microsoft DNS doesn't let you create a CNAME record at the apex of a zone, but it does allow a DNAME record, so I came up with this script to create all of the zones.

The script, the google.txt file and some basic instructions can be found here.

(I added the length check because the original text file had some carriage returns at the end.)
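
For reference, the entries in the google.txt file are the Google search domains with a leading dot, which is why the script prepends www to each one. A couple of illustrative entries (not the full list):

.google.com
.google.com.au
.google.co.uk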

As always, no responsibility is accepted for its use.

param([string]$inputfile="google.txt")
#Check for the input file
if (Test-Path $inputfile) {
    Write-Output "Input file located"
}
else {
    Write-Output "Please supply file containing google zone list"
    exit
}
#Process each line in the input file and create a zone and a DNAME record
foreach ($zone in Get-Content $inputfile) {
    $count++
    #Skip entries that are too short to be real zones (trailing carriage returns in the source file)
    if ($zone.Length -gt 5) {
        $zone = "www" + $zone
        Write-Output "Processing entry $($count). Creating zone for $($zone)"
        dnscmd /zoneadd $zone /dsprimary
        Write-Output "Processing entry $($count). Creating DNAME entry for $($zone)"
        dnscmd /recordadd $zone "@" DNAME forcesafesearch.google.com
    }
    else {
        Write-Output "Zone data for entry $($count) too short. Not processing."
    }
}
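
Once the zones are in place, a quick way to verify from a domain-joined client is to resolve one of the hosts and confirm it now resolves via forcesafesearch.google.com (Resolve-DnsName ships with Windows 8/Server 2012 and later):

Resolve-DnsName www.google.com | Select-Object Name, Type, NameHost, IPAddress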

Resize User Photos and Import them into Active Directory Accounts


This script uses PowerShell and ImageMagick to look in a specified path for photos named with the EmployeeID attribute of the users in a specified OU, resize each image to the correct dimensions, and then write it into the thumbnailPhoto attribute of the user's Active Directory account.

As always, no responsibility is accepted for its use.

param([string]$searchbase, [string]$imagepath)
#Import the ActiveDirectory PowerShell module
Import-Module ActiveDirectory
#Check for mandatory parameters
if (!$searchbase -or !$imagepath) {
    Write-Output 'Usage: ADImages {searchbase} {imagepath}'
    Write-Output 'eg. ADImages "OU=Staff,OU=Users,DC=orgname,DC=com,DC=au" \\fileserver\Userimages'
    exit
}
#Check that the searchbase exists
if ([adsi]::Exists("LDAP://$($searchbase)")) {
    Write-Output "Found Searchbase $($searchbase)"
}
else {
    Write-Output "Searchbase $($searchbase) not found"
    exit
}
#Check that the image path exists
if (Test-Path $imagepath) {
    Write-Output "Found Image Path $($imagepath)"
}
else {
    Write-Output "Image Path $($imagepath) not found"
    exit
}
#Check for the ImageMagick conversion tool
if (Test-Path ".\ImageMagick\convert.exe") {
    Write-Output "ImageMagick tool found"
}
else {
    Write-Output "ImageMagick tool not found. Download from http://www.imagemagick.org/"
    exit
}
#Create the thumbnail directory if it doesn't exist
if (Test-Path ".\ADThumbs") {
    Write-Output "Thumbnail directory already exists"
}
else {
    Write-Output "Creating Thumbnail directory"
    New-Item -ItemType Directory -Path .\ADThumbs | Out-Null
}
#Get the users from the searchbase, including the employeeID attribute
$UserList = Get-ADUser -Filter * -SearchBase $searchbase -Properties employeeID
foreach ($User in $UserList) {
    $EmpID = $User.employeeID
    Write-Host "Looking for Employee Photo for User $($User.SamAccountName) with ID $($EmpID)"
    #Test whether a source image exists for this user
    if (Test-Path "$($imagepath)\$($EmpID).jpg") {
        #Scale the source JPG to a 96x96 thumbnail with ImageMagick
        $jpgfile = "$($imagepath)\$($EmpID).jpg"
        $newjpgfileName = ".\ADThumbs\$($EmpID)-AD.jpg"
        Write-Output "Scaling $($jpgfile) to $($newjpgfileName)"
        .\ImageMagick\convert $jpgfile -thumbnail 96 -gravity center -crop 96x96+0-15 +repage -strip $newjpgfileName
        #Write the thumbnail photo back to the AD user account
        $photo = [byte[]](Get-Content $newjpgfileName -Encoding Byte)
        Set-ADUser $User -Replace @{thumbnailPhoto=$photo}
    }
    else {
        #User image file not found
        Write-Output "Employee ID $($EmpID) not found in $($imagepath)"
    }
}
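
Running it looks like the usage line in the script; the file name ADImages.ps1 is just whatever you saved the script as:

.\ADImages.ps1 "OU=Staff,OU=Users,DC=orgname,DC=com,DC=au" \\fileserver\Userimages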

Monday, February 10, 2014

File Path manipulation in Excel

Saw this over at stackoverflow. Had to make a note of it for future reference.

http://stackoverflow.com/questions/18617349/excel-last-character-string-match-in-a-string


Let's say for example you want the right-most \ in the following string (which is stored in cell A1):
Drive:\Folder\SubFolder\Filename.ext
To get the position of the last \, you would use this formula:
=FIND("@",SUBSTITUTE(A1,"\","@",(LEN(A1)-LEN(SUBSTITUTE(A1,"\","")))/LEN("\")))
That tells us the right-most \ is at character 24. It does this by substituting the very last "\" with an "@" and then looking for that "@". It finds the last occurrence by counting them with
(LEN(string)-LEN(SUBSTITUTE(string, substring, "")))/LEN(substring)
In this scenario the substring is simply "\", which has a length of 1, so you could leave off the division at the end and just use:
=FIND("@",SUBSTITUTE(A1,"\","@",LEN(A1)-LEN(SUBSTITUTE(A1,"\",""))))
Now we can use that to get the folder path:
=LEFT(A1,FIND("@",SUBSTITUTE(A1,"\","@",LEN(A1)-LEN(SUBSTITUTE(A1,"\","")))))
Here's the folder path without the trailing \
=LEFT(A1,FIND("@",SUBSTITUTE(A1,"\","@",LEN(A1)-LEN(SUBSTITUTE(A1,"\",""))))-1)
And to get just the filename:
=MID(A1,FIND("@",SUBSTITUTE(A1,"\","@",LEN(A1)-LEN(SUBSTITUTE(A1,"\",""))))+1,99)
However, here is an alternate version of getting everything to the right of the last instance of a specific character. So using our same example, this would also return the file name:
=TRIM(RIGHT(SUBSTITUTE(A1,"\",REPT(" ",99)),99))
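
Incidentally, if you need the same split outside Excel, PowerShell's built-in Split-Path does both halves directly (a quick equivalent, not part of the original tip):

Split-Path 'Drive:\Folder\SubFolder\Filename.ext'       # returns Drive:\Folder\SubFolder
Split-Path 'Drive:\Folder\SubFolder\Filename.ext' -Leaf # returns Filename.ext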