Thursday, January 7, 2021

Retrieving Google Place Data via REST Query to Google API

So my organisation needed accurate latitude and longitude values for all of its facilities, and we had determined that we didn't have an accurate set of records for them.

I determined that we could query Google for this using a web request in the format:


https://maps.googleapis.com/maps/api/place/details/json?placeid=PutYourPlaceIDHere&key=PutYourAPIKeyHere
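
A single Place ID can be sanity-checked from PowerShell before running the full script below; something like this works (the Place ID and API key are placeholders):

$placeId = "PutYourPlaceIDHere"
$key     = "PutYourAPIKeyHere"
$resp    = Invoke-RestMethod "https://maps.googleapis.com/maps/api/place/details/json?placeid=$placeId&key=$key"
$resp.result.geometry.location    # the lat / lng pair used by the script below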

So, having a list of all of the Google Place IDs for the organisation's Google My Business setup, I wrote a little script to invoke a JSON call to Google and pull the latitude and longitude for each facility.

<#
# AUTHOR  : Sean Bradley
# CREATED : 08-01-2021
# UPDATED : 
# COMMENT : Uses Google Maps API Key to grab GMB Data from Web.
# Updates: 
# 1. 
#>
#Establish Logging
$RootPath = "C:\Scripts"
$Logfile = "$RootPath\GetGMBData.Log"
Start-Transcript -path $Logfile
#Establish variables
Write-Host "Setting some variables" -ForegroundColor Green
$InputFile = "$RootPath\GMBPlaceIDs.csv"
$OutputFile = "$RootPath\GMBData.csv"
$MapsKey = "PutYourAPIKeyHere"
$MapsURL = "https://maps.googleapis.com/maps/api/place/details/json?placeid="
Write-Host "Doing some preparatory file checks" -ForegroundColor Gray
$FileExists = Test-Path -Path $OutputFile -PathType Leaf
If ($FileExists) {
    Write-Host "Deleting last export" -ForegroundColor Gray
    Remove-Item $OutputFile -Force | Out-Null
}
# Get Input Data from CSV File
$FileExists = Test-Path -Path $InputFile -PathType Leaf
If ($FileExists) {
    Write-Host "Loading $InputFile for processing."
    $tblData = Import-Csv $InputFile
}
else {
    Write-Host "$InputFile not found. Stopping script."
    exit
}
# Query Google for the required JSON Data
foreach ($row in $tblData) {

    Write-Host "Getting Google Data for" $row.'Centre' "with Google Place ID" $row.'PlaceId'

    $QueryURL = $MapsURL + $row.'PlaceId' + '&key=' + $MapsKey

    Invoke-RestMethod $QueryURL -Method Get |
        Select-Object @{Label = "Centre";  Expression = {$row.'Centre'}},
                      @{Label = "PlaceID"; Expression = {$row.'PlaceId'}},
                      @{Label = "Lat";     Expression = {$_.result.geometry.location.lat}},
                      @{Label = "Lng";     Expression = {$_.result.geometry.location.lng}} |
        Export-Csv -Path $OutputFile -NoTypeInformation -Append   # Export each result to CSV
}
Stop-Transcript | out-null

Monday, October 31, 2016

How to Stop Windows 10 Domain Computers reporting "Disable apps to help improve performance"

Create or modify a Group Policy Object that applies to the target computers.

Under Computer Configuration\Policies\Windows Settings\Scripts\Startup, create a PowerShell script entry named "DisableStartupAppTask.ps1".

In the script, have the single line of code:

Disable-ScheduledTask -TaskName '\Microsoft\Windows\Application Experience\StartupAppTask'
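
If the full-path form of -TaskName doesn't match the task on a particular build, the path and name can also be passed as separate parameters:

Disable-ScheduledTask -TaskPath '\Microsoft\Windows\Application Experience\' -TaskName 'StartupAppTask'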



Wednesday, June 3, 2015

sFlow: Sampling rates

From http://blog.sflow.com/2009/06/sampling-rates.html:



Sampling rates


A previous posting discussed the scalability and accuracy of packet sampling and the advantages of packet sampling for network-wide visibility.

Selecting a suitable packet sampling rate is an important part of configuring sFlow on a switch. The table in the original post gives suggested values that should work well for general traffic monitoring in most networks. However, if traffic levels are unusually high, the sampling rate may be decreased (e.g. use 1 in 5000 instead of 1 in 2000 for 10Gb/s links).

Configure sFlow monitoring on all interfaces on the switch for full visibility. Packet sampling is implemented in hardware so all the interfaces can be monitored with very little overhead.

Finally, select a suitable counter polling interval so that link utilizations can be accurately tracked. Generally the polling interval should be set to export counters at least twice as often as the data will be reported (see Nyquist-Shannon sampling theory for an explanation). For example, to trend utilization with minute granularity, select a polling interval of between 20 and 30 seconds. Don't be concerned about setting relatively short polling intervals; counter polling with sFlow is very efficient, allowing more frequent polling with less overhead than is possible with SNMP.

Sunday, March 1, 2015

How to Convert a PFX (PKCS#12) SSL Certificate to Separate KEY and CRT Files


I've had to look this up a number of times, so I'm posting it here for posterity.

source: http://www.markbrilman.nl/2011/08/howto-convert-a-pfx-to-a-seperate-key-crt-file/

`openssl pkcs12 -in [yourfile.pfx] -nocerts -out [keyfile-encrypted.key]`

What this command does is extract the private key from the .pfx file. Once entered, you need to type in the import password of the .pfx file. This is the password that you used to protect your keypair when you created your .pfx file. If you cannot remember it any more, you can just throw your .pfx file away, because you won't be able to import it again, anywhere. Once you have entered the import password, OpenSSL asks you to type in another password, twice. This new password will protect your .key file.

Now let’s extract the certificate:

`openssl pkcs12 -in [yourfile.pfx] -clcerts -nokeys -out [certificate.crt]`

Just press enter and your certificate appears.

Now, as I mentioned in the intro of this article, you sometimes need to have an unencrypted .key file to import on some devices. I probably don't need to mention that you should be careful. If you store your unencrypted keypair somewhere in an unsafe location, anyone can have a go with it and impersonate, for instance, a website or a person in your company. So always be extra careful when it comes to private keys! Just throw the unencrypted key file away when you're done with it, keeping just the encrypted one.

The command:

`openssl rsa -in [keyfile-encrypted.key] -out [keyfile-decrypted.key]`

Notes:
- When you first extract the key, apply a new password (probably the same as you used to extract it) and then create an unencrypted key with the rsa command above
- Use an unencrypted key file for NGINX, otherwise it will ask for the password every time it is restarted.
- Check the top of the extracted .crt file for extra bits above the -----BEGIN... line and remove them if necessary.
- This certificate needs to be concatenated with the full chain of certificate authorities: `cat domain.crt CA_bundle.crt > final.crt`
- test the cert with `openssl s_client -showcerts -connect www.domain.com:443`

Addendum:

To convert a PFX file to a PEM files:

`openssl pkcs12 -in [yourfile.pfx] -out [certificate.pem] -clcerts`

`openssl pkcs12 -in [yourfile.pfx] -out [cacerts.pem] -cacerts`

To convert a PFX file to a combined PEM file in one step AND remove encryption:

`openssl pkcs12 -in [yourfile.pfx] -out [decrypted.pem] -nodes`


Tuesday, November 25, 2014

Enforce Google Safe Search

So Google is no longer going to permit the nossl DNS trick that previously allowed organisations to disable SSL for searches so that Safe Search could be enforced.

Google Online Security Blog: An update to SafeSearch options for network administrators

The option that they are now permitting is a DNS trick to point users to forcesafesearch.google.com which will still be SSL enabled, but will not allow the user to disable Safe Search.

The only way to ensure this for all Google search engines is to create a DNS zone for each of Google's search domains... all 193 or so.

Microsoft doesn't let you create a CNAME entry for the parent zone, but it does allow you to create a DNAME entry, so I came up with this script to create all of the zones.

The script, the google.txt file and some basic instructions can be found here.

(I added the length check because the original text file had some carriage returns at the end.)

As always, no responsibility is accepted for its use.

 param([string]$inputfile="google.txt")  
 #Check for the Input file  
 $FileCheck = Test-Path $inputfile  
 if ($FileCheck -eq "True")  
      {  
      write-output "Input file located"  
      }  
 else  
      {  
      write-output "Please supply file containing google zone list"  
      exit  
      }  
 #Process each line in the Input file and create a zone and DNAME record  
 foreach ($zone in Get-Content $inputfile)  
      {  
      $count=$count+1  
      $len = $zone.length -as [int]  
      if ($len -gt 5)  
           {  
           $zone="www"+$zone  
           write-output "Processing entry $($count). Creating zone for $($zone)"  
           dnscmd /zoneadd $zone /dsprimary  
           write-output "Processing entry $($count).Creating DNAME entry for $($zone)"  
           dnscmd /recordadd $zone "@" DNAME forcesafesearch.google.com  
           }  
           else  
           {  
           write-output "Zone data for entry $($count) too short. Not processing."  
           }  
      }  
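
On newer domain controllers with the DnsServer PowerShell module, the zone creation can be done with Add-DnsServerPrimaryZone instead of dnscmd. A rough sketch of the same loop (still adding the DNAME record with dnscmd, and still expecting the google.txt zone list):

 foreach ($zone in Get-Content "google.txt")
      {
      if ($zone.Length -gt 5)
           {
           $zone = "www" + $zone
           Add-DnsServerPrimaryZone -Name $zone -ReplicationScope "Forest"
           dnscmd /recordadd $zone "@" DNAME forcesafesearch.google.com
           }
      }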

Resize User Photos and Import them into Active Directory Accounts


Resize User Photos and Import them into Active Directory Accounts using PowerShell and ImageMagick.

This script looks in a specified path for photos named with the EmployeeID attribute of the users in a specified OU, resizes the images to the correct size and then writes the images into the thumbnailPhoto attribute of each user's Active Directory account.

As always, no responsibility is accepted for its use.

 param([string]$searchbase , [string]$imagepath)  
 #Import the ActiveDirectory PowerShell module  
 import-module ActiveDirectory  
 #Check for Mandatory Parameters  
 if (!$searchbase)  
      {  
      write-output 'Usage: ADImages {searchbase} {imagepath}'  
      write-output 'eg. ADImages "OU=Staff,OU=Users,DC=orgname,DC=com,DC=au" \\fileserver\Userimages'  
      exit  
      }  
 if (!$imagepath)  
      {  
      write-output 'Usage: ADImages {searchbase} {imagepath}'  
      write-output 'eg. ADImages "OU=Staff,OU=Users,DC=orgname,DC=com,DC=au" \\fileserver\Userimages'  
      exit  
      }  
 #Check if the Searchbase exists  
 $OUCheck = [adsi]::Exists("LDAP://$($searchbase)")  
 if ($OUCheck -eq "True")   
      {  
      write-output "Found Searchbase $($searchbase)"  
      }  
 else  
      {  
      write-output "Searchbase $($searchbase) not found"  
      exit  
      }  
 #Check that the Image Path exists  
 $ImageCheck = Test-Path $imagepath  
 if ($ImageCheck -eq "True")  
      {  
      write-output "Found Image Path $($imagepath)"  
      }  
 else  
      {  
      write-output "Image Path $($imagepath) not found"  
      exit  
      }  
 #Check for the ImageMagick Conversion Tool  
 $ToolCheck = Test-Path ".\ImageMagick\convert.exe"  
 if ($ToolCheck -eq "True")  
      {  
      write-output "ImageMagick tool found"  
      }  
 else  
      {  
      write-output "ImageMagick tool not found. Download from http://www.imagemagick.org/"  
      exit  
      }  
 #Create the Thumbnail directory if it doesn't exist  
 $DirCheck = Test-Path ".\ADThumbs"  
 if ($DirCheck -eq "True")  
      {  
      write-output "Thumbnail directory already exists"  
      }  
 else  
      {  
      write-output "Creating Thumbnail directory"  
      New-Item -ItemType directory -Path .\ADThumbs  
      }  
 #Get an array of users from the Searchbase  
 $UserList = Get-ADUser -Filter * -SearchBase $searchbase  
 Foreach ($User in $UserList)  
      {  
      #Get the EmployeeID Attribute  
      $EmpID = Get-ADUser $User -Properties employeeID | Select-Object -ExpandProperty employeeID  
      write-host "Looking for Employee Photo for User $($User) with ID $($EmpID)"  
      #Tests to see if the UserImages file exists  
      $FileCheck = Test-Path "$($imagepath)\$($EmpID).jpg"  
      if ($FileCheck -eq "True")   
           {  
           #Retrieves JPG files of the target user from the UserImages share  
           $jpgfile = "$($imagepath)\$($EmpID).jpg"  
           $newjpgfileName = ".\ADThumbs\$($EmpID)-AD.jpg"  
           write-output "Scaling $($jpgfile) to $($newjpgfileName)"  
           .\ImageMagick\convert $jpgfile -thumbnail 96 -gravity center -crop 96x96+0-15 +repage -strip $newjpgfileName   
           #Write the thumbnail photo back to the AD user Account  
           $photo = [byte[]](Get-Content $newjpgfileName -Encoding byte)  
           Set-ADUser $User -Replace @{thumbnailPhoto=$photo}  
           }  
      else  
           {  
           #User Image file not found  
           write-output "Employee ID $($EmpID) not found in $($imagepath)"  
           }  
      }  

Monday, February 10, 2014

File Path manipulation in Excel

Saw this over at stackoverflow. Had to make a note of it for future reference.

http://stackoverflow.com/questions/18617349/excel-last-character-string-match-in-a-string


Let's say for example you want the right-most \ in the following string (which is stored in cell A1):
Drive:\Folder\SubFolder\Filename.ext
To get the position of the last \, you would use this formula:
=FIND("@",SUBSTITUTE(A1,"\","@",(LEN(A1)-LEN(SUBSTITUTE(A1,"\","")))/LEN("\")))
That tells us the right-most \ is at character 24. It does this by looking for "@" and substituting the very last "\" with an "@". It determines the last one by using
(len(string)-len(substitute(string, substring, "")))/len(substring)
In this scenario, the substring is simply "\" which has a length of 1, so you could leave off the division at the end and just use:
=FIND("@",SUBSTITUTE(A1,"\","@",LEN(A1)-LEN(SUBSTITUTE(A1,"\",""))))
Now we can use that to get the folder path:
=LEFT(A1,FIND("@",SUBSTITUTE(A1,"\","@",LEN(A1)-LEN(SUBSTITUTE(A1,"\","")))))
Here's the folder path without the trailing \
=LEFT(A1,FIND("@",SUBSTITUTE(A1,"\","@",LEN(A1)-LEN(SUBSTITUTE(A1,"\",""))))-1)
And to get just the filename:
=MID(A1,FIND("@",SUBSTITUTE(A1,"\","@",LEN(A1)-LEN(SUBSTITUTE(A1,"\",""))))+1,99)
However, here is an alternate version of getting everything to the right of the last instance of a specific character. So using our same example, this would also return the file name:
=TRIM(RIGHT(SUBSTITUTE(A1,"\",REPT(" ",99)),99))

Sunday, May 19, 2013

Memory Leak in Windows 8 Network Data Usage Monitoring Driver

Just thought I'd share this experience I had over the weekend, as it may save someone else many hours of troubleshooting.

I've been tinkering around with Windows 8 at home, even though I know there's little likelihood that we'll implement it at work any time soon.

While using my Windows 8 machine to copy a large amount of files from my NAS to a USB drive, I was experiencing lock-ups of my system. It wasn't a complete crash. The system would just become extremely unresponsive.

It soon became apparent that something was leaking memory. I was seeing the amount of memory being consumed skyrocket to 100%, at which point the copy process would crash and the system would stop responding. The Task Manager and Performance Monitor were not attributing the memory to any process, however.

I tried using robocopy instead of Explorer copy. Same thing.

I tried updating the Realtek network driver, USB 3 driver and even the ASUS BIOS, (as they were all a few versions behind). Same thing.

I was getting to the point where I was figuratively scratching my head, so I tried booting into safe mode with networking. Aha! The memory usage stayed consistent and the copy performed just fine!

There are a number of network-related drivers that safe mode doesn't load. DriverView showed that one of them is the Windows Network Data Usage Monitoring Driver, ndu.sys, which was introduced in Windows 8 and provides "network data usage monitoring functionality".

Disabling this driver by changing the start value to 4 in HKLM\SYSTEM\CurrentControlSet\Services\Ndu 
solved the problem.
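
From an elevated PowerShell prompt, the same change can be made like this (reboot afterwards for it to take effect):

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Ndu' -Name Start -Value 4 -Type DWord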

Maybe this will be fixed when Microsoft releases Blue.

Monday, December 3, 2012

Using ICACLS to Grant Permissions on Folders

It took me a little while to work this out because I found the documentation here a little confusing and multiple interpretations of it seem to be floating around the net.

My goal was to grant a group permissions to access a folder, modify the subfolders and files within it, but not have the ability to modify the folder itself in any way. A pretty common requirement right? You would think some administrator somewhere would have come up with a clear set of instructions on how to do it, but I couldn't find any definitive answer that did quite what I wanted. Eventually, I figured out what I was doing wrong and scripted it myself.

So, the answer is:

icacls "Folder Path" /grant:r "AuthenticationRealm\GroupOrUser":(OI)(CI)(IO)(D,RC,S,AS,GR,GW,GE,RD,WD,AD,REA,WEA,X,DC,RA)

icacls "Folder Path" /grant "AuthenticationRealm\GroupOrUser":(RC,S,AS,GR,GE,RD,WD,AD,REA,X,DC,RA)

The first command replaces [/grant:r] any existing permissions for the GroupOrUser on all subfolders and files only of the Folder Path and all of its contents that inherit [(OI)(CI)(IO)], without forcing inheritance, and grants everything except Change permissions and Take ownership rights.

The second command grants GroupOrUser permissions to the Folder Path itself, but grants only those permissions that allow the GroupOrUser to be able to create files/folders and write data. They are not able to delete or modify the folder.

The permissions list in the first command can be modified to give Read Only access or Write Only (Dropbox) style access. If you're doing dropbox style access, it's sometimes a good idea to give the special identity CREATOR OWNER extra permissions so that submitters can modify their own work and it can also be a good idea to use Access-based Enumeration so that submitters cannot see other users submissions that may be in the same share.
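
For example, a read-only variant of the first command might look something like this, using the simple RX (read and execute) right rather than spelling out each specific permission:

icacls "Folder Path" /grant:r "AuthenticationRealm\GroupOrUser":(OI)(CI)(IO)RX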

There's probably a better way to do this in Powershell, but I haven't discovered it yet.




Monday, October 17, 2011

Enumerating Indirect Group Memberships

A colleague asked me yesterday if I knew how to get a list of all direct AND indirect group memberships that a user had. He wanted to use this to estimate the Kerberos token size for users with large numbers of group memberships as this can cause access problems if it exceeds set limits.

I vaguely remembered that I had something like this in a script I wrote to enumerate the members of a group both directly and indirectly. It uses the functionality of the Remote Server Administration Tools. There's also a hotfix to correct the output. I dug out the script and revised it to provide what he required.

In its simplest form, the command to run is:


dsget user <fulldn> -memberof -expand

For example:

dsget user "CN=testuser,OU=Staff,DC=company,DC=com" -memberof -expand

This will provide a list of group memberships in fulldn format. To simplify it to SAM group names you can pipe the output to another dsget command for the groups:

dsget user <fulldn> -memberof -expand | dsget group -samid

You can also simplify the input if you pipe in the dsquery command for the user:

dsquery user -samid <samid> | dsget user -memberof -expand | dsget group -samid

For example:

dsquery user -samid testuser | dsget user -memberof -expand | dsget group -samid 
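
If the Active Directory PowerShell module is available, the same nested expansion can also be done with an LDAP_MATCHING_RULE_IN_CHAIN query (a sketch; the user DN below is just the example from above):

Get-ADGroup -LDAPFilter "(member:1.2.840.113556.1.4.1941:=CN=testuser,OU=Staff,DC=company,DC=com)" | Select-Object SamAccountName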


Edit: You can use the same technique to list the members of a group:
dsquery group -samid <Groupname> | dsget group -members | dsget user -samid -fn -ln
Also, be wary of pasting one of these command strings in Outlook, as it has the tendency to automatically change hyphens to the longer "dash", which is an invalid character if you copy it out of Outlook and paste it to the command prompt.

Monday, September 19, 2011

DNS Suffix Search Order via DHCP

I was recently working on a new parallel domain with one of the members of my team and the issue of DNS Suffix Search Order came up. The search order had to be set to include the parallel domain, the primary domain and a number of other things.

I was adamant that the search order could be set by DHCP as well as by GPO, but I couldn't specifically remember the details. My engineer pointed me to this Microsoft Knowledge Base article that states:
The following methods of distribution are not available for pushing the domain suffix search list to DNS clients:
  • Dynamic Host Configuration Protocol (DHCP). You cannot configure DHCP to send out a domain suffix search list. This is currently not supported by the Microsoft DHCP server.
Fortunately, an engineer from another department came to the rescue with DHCP Option 135. This can be added in Windows Server 2008 as follows (I believe this originated in a TechNet post):

1. On the 2008 Server running DHCP, open the DHCP MMC.
2. Expand DHCP and choose the DHCP server name.
3. Right click on IPv4
4. Choose "Set Predefined Options"
5. Click on Add.
6. Name: "Domain suffix search order"
Data Type: String
Code: "135" (without the quotation marks)
Description: "List of domain suffixes in order" (without the quotation marks)
String: enter your search suffixes separated by comma with no spaces

sample1.com.au,sample2.net,sample3.org

7. Click OK to save the changes.
8. Exit the DHCP MMC and restart the DHCP Server Service.
9. Open the DHCP MMC again; option 135 now appears as an available scope option.
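
On Windows Server 2012 or later, the same option can be defined and set with the DhcpServer PowerShell module instead of clicking through the MMC (a sketch; the scope ID is a placeholder for your own scope):

Add-DhcpServerv4OptionDefinition -OptionId 135 -Name "Domain suffix search order" -Type String -Description "List of domain suffixes in order"
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -OptionId 135 -Value "sample1.com.au,sample2.net,sample3.org"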






Monday, March 21, 2011

Reverse DNS

I recently had a guy ask me how he could fix a corrupt reverse DNS.

Simple enough, I thought, and proceeded to instruct him on how to change the AD Integrated DNS zone to a "Standard Primary" DNS zone, then take the DNS file, import it into Excel and manipulate the data however he wanted. He could then just put the file back and reload the DNS zone, and that's that.

I also told him how he could use DNSCMD to export the DNS data from an AD Integrated zone:
dnscmd /ZoneExport FQDN_of_zonename Zone_export_file

He then started telling me he had problems locating the reverse DNS information and it was at this point my techie sense started tingling. He may not even have a reverse DNS zone (it is completely optional, but can be quite useful), or may actually be referring to his DNS resolver cache. (I haven't determined the answer yet).

Reverse DNS operates just like regular DNS, but instead of looking up an IP address using a hostname, you look up the hostname from the IP address. This can be very useful in easily determining which host is the source or destination of traffic, instead of finding the port on the local switch.

Reverse DNS zones use the network address in reverse notation and the suffix in-addr.arpa. So if your network's IP Schema is based on subnets of the private range 172.16.0.0, you could have a reverse DNS zone of 16.172.in-addr.arpa, which could contain entries for all hosts within all subnets on your network. Of course, if you have an extremely large network, you probably want to break this down further, such as 10.16.172.in-addr.arpa, etc.

So, if your host server.company.com has an (A) record of 172.16.10.99, it can have a pointer (PTR) record of 99.10.16.172.in-addr.arpa in the reverse DNS zone, pointing back to its designated hostname of server.company.com.
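
On a Windows DNS server with the DnsServer PowerShell module, that PTR record can be created and then checked like this (a sketch, using the example names and addresses above):

Add-DnsServerResourceRecordPtr -ZoneName "16.172.in-addr.arpa" -Name "99.10" -PtrDomainName "server.company.com"
Resolve-DnsName 172.16.10.99 -Type PTR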

Reverse DNS zones for IPv6 use the special zone ip6.arpa and store their loooong IPv6 addresses as a sequence of nibbles in reverse order, in much the same way as the IPv4 addresses are stored in reverse order. So an IPv6 address of 2001:0db8:85a3::62cd (expanding the :: to its full run of zeros first) will be stored as a PTR record as d.c.2.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.3.a.5.8.8.b.d.0.1.0.0.2.ip6.arpa.

A caching name server will resolve a query, even though it is not authoritative for the result, by making a query to the authoritative server on behalf of the client. The caching name server will then store this record in a local cache for its Time-To-Live (TTL). This results in quicker resolutions and reduced load on Internet name servers. A corrupted resolver cache can simply be cleared and it will rebuild itself with use.

Wednesday, February 23, 2011

Importing Autocomplete File into Outlook 2010

This is something I answered over at Experts Exchange and thought I'd post here as well.

The .NK2 file that Outlook 2003 and Outlook 2007 use to suggest addresses when you start typing in the recipients field is no longer used by Outlook 2010.

This file can be imported by Outlook 2010 and the contacts placed in the "Suggested Contacts" folder in the mailbox.

Copy the .NK2 file to the "C:\Users\%username%\AppData\Roaming\Microsoft\Outlook" folder (assuming the client is Windows 7)

Rename the nk2-file to the name of your mail profile:

     In the Control Panel, type "mail" into the search box.
     Run the Mail applet.
     Click on the Show Profiles… button.
     By default, your profile is called “Outlook”. So in that case you would call your file “outlook.nk2”.

Start Outlook with the /importnk2 switch:
     outlook.exe /importnk2

Outlook will import the NK2 data into the Suggested Contacts folder.

Thursday, December 16, 2010

Wake on LAN over the Internet

I was recently sitting at a desk at work with one of my colleagues and needed some information on my home computer. He watched as I turned on my home computer, established a remote session into it, got the information I needed and then shut it down again (I don't believe in leaving the computer turned on and wasting power).

"So that was interesting," said my colleague. "How did you set that up?"

The first thing to know about waking up your computer over the Internet is that not all home firewall/routers are going to be able to do it. Check the specs of your device. Along with the usual things like port forwarding, it needs to support static ARP entries. If it can, it's relatively straightforward.

First of all, set a static IP address on your target machine. Then go into the properties of the network card and enable Wake on LAN if it is not already enabled (it's usually enabled by default). You may have to enable Wake on LAN in the BIOS as well. Record the MAC address of your machine, as you will need this to wake it (you can get it at the command prompt with ipconfig /all).

Next, you need to register the static IP address of your machine in the ARP table of your router. This is the part that some firewall/router devices targeting the home market are not going to be able to do. You will need to refer to your device's manual or support site to determine how to do this. You may not be able to do this while the network interface you are registering is connected to the network, so you may require another network interface or a second computer.

Finally, you need to set up a virtual server on your firewall with the following parameters:
  • Use the UDP protocol.
  • Use 9 for the internal port.
  • Use your static IP address of the target computer for the internal address.
  • Use any common port for the external port, but choose one not already in use. If you don't have a POP3 Mail server for instance, you could use 110.

I would also advise that you set up a Dynamic DNS. Many home firewall/router devices will be able to register their address automatically with one of these sites (for example: http://www.dyndns.com or http://www.no-ip.com.) This enables you to just remember a FQDN entry instead of an IP address and will also update if your IP address changes.

Now you should be able to turn off your computer and use another computer, or even a smart phone to send a magic packet to wake up the computer. I use http://www.depicus.com/wake-on-lan/woli.aspx

Just enter the MAC address of the computer, the IP address or FQDN, 255.255.255.255 as the subnet mask (as you are targeting a single host) and the port number you registered as the external port for your virtual server. Click the WAKE ON LAN button and your computer should turn itself on moments later!
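
If you'd rather send the magic packet yourself than use a web page, a minimal PowerShell sketch looks like this (the MAC address, DNS name and port are placeholders for your own values):

$mac    = "00-11-22-33-44-55"       # MAC address of the target machine
$target = "myhome.dyndns.example"   # your Dynamic DNS name or public IP
$port   = 110                       # the external port forwarded to UDP 9 internally

# A magic packet is six 0xFF bytes followed by the MAC address repeated 16 times
$macBytes = $mac -split '[-:]' | ForEach-Object { [Convert]::ToByte($_, 16) }
$packet   = [byte[]]((,0xFF * 6) + ($macBytes * 16))

$udp = New-Object System.Net.Sockets.UdpClient
$udp.Connect($target, $port)
[void]$udp.Send($packet, $packet.Length)
$udp.Close()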

If you have another virtual server set up to relay VNC or RDP to your machine, you can then control the machine remotely.


Cheers,
Sean

Thursday, November 11, 2010

iPhone emails missing message body

There's any number of blogs and forum posts on the web that probably already have this, but I discovered an issue with the way the iPhone email handles interaction with PDFs and iBooks today.

If you download an email onto the iPhone with an attached PDF and save that PDF to iBooks, the email body in all messages will disappear.


The simple solution is to reboot the phone or kill the mail process.

I expect Apple will patch this soon.

Thursday, September 30, 2010

Reset Passwords for all User Accounts in an OU

I realise that there's plenty of scripts floating around the net that already do this, but for me this was simply an exercise.

Note: I haven't gotten around to testing it yet.


' PasswordReset.vbs
' Resets all passwords within an AD Container
' Version 1.0
' 27 September 2010


Option Explicit
Dim objRootDSE, objOU, objUser
Dim strTargetOU, strForceReset, strEnAcct, strDNSDomain, strNewPass
Dim intCounter, intUACval, intPWLval


' Change strTargetOU to location of user accounts
strTargetOU = "MyContainer"


' Change strNewPass to the new password
strNewPass = "Password123"


' Change strForceReset to "Yes" in order to force users to reset passwords
strForceReset = "No"


' Change strEnAcct to "Yes" in order to enable disabled accounts
strEnAcct = "No"


' Int Values 
' See Microsoft KB305144 for UserAccountControl values
' Setting PwdLastSet value to 0 forces password reset
intUACval = 544
intPWLval = 0
intCounter = 0


Set objRootDSE = GetObject("LDAP://RootDSE") 
strDNSDomain = objRootDSE.Get("DefaultNamingContext")
strTargetOU = "OU=" & strTargetOU & ", " & strDNSDomain
set objOU =GetObject("LDAP://" & strTargetOU )


For each objUser in objOU
If objUser.class="user" then
objUser.SetPassword strNewPass
objUser.SetInfo


If strForceReset="Yes" Then
objUser.Put "pwdLastSet", intPWLval
objUser.SetInfo
End if
If strEnAcct="Yes" Then
objUser.Put "userAccountControl", intUACval
objUser.SetInfo
End if


intCounter = intCounter +1
End if
Next


WScript.Echo "New Password: " & strNewPass & vbCr & "Accounts changed: " & intCounter _
  & vbCr & "Password Change Forced: " & strForceReset & vbCr & "Disabled Accounts Enabled: " & strEnAcct
  


Tuesday, September 7, 2010

SCCM: Excluding a directory structure from being inventoried.

A colleague asked me today how to exclude a directory structure on a single client machine from being inventoried by SCCM. The answer is to create a hidden sparse text file named skpswi.dat in the folder.
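
One quick way to create it from PowerShell, run from inside the folder you want excluded (an empty file is fine, it just needs to be hidden):

$file = New-Item -ItemType File -Name "skpswi.dat"
$file.Attributes = 'Hidden'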

Thanks Tyriax for the question!

Wednesday, September 1, 2010

Office Autosave Locations

I always thought that the autosave for an Office file was created in the same location as the file. It turns out that this was because I almost always work with Office files on network drives.

When a new file is started, a temporary file is created. This can be either in the Windows temp directory or in "C:\Documents and Settings\<username>\Application Data\Microsoft". If the file is stored on a network drive, then it will be temporarily created there.

This temporary file will have a few different letters after the tilde (or squiggly line "~"). These are good ones to look for to find some lost info. There are others, but these are the ones most likely to contain data that can be recovered.

Thursday, August 12, 2010

Subroutine to quit a VBS login script on Windows 2003/2008 servers

Sub DetectOS()
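' Assumes a WriteLog logging subroutine is defined elsewhere in the login script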

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")

Set colOperatingSystems = objWMIService.ExecQuery _
    ("Select * from Win32_OperatingSystem")

    For Each objOperatingSystem In colOperatingSystems
        If InStr( objOperatingSystem.Caption,"2003") <> 0 _
        or InStr( objOperatingSystem.Caption,"2008") <> 0  Then
            WriteLog "Detected Operating System: " & objOperatingSystem.Caption
                WriteLog "Script will not continue...."
            WScript.Quit(0)
        Else
                WriteLog "Detected Operating System: " & objOperatingSystem.Caption
                WriteLog "Script will continue....."
        End if
    Next
End Sub

Sunday, July 4, 2010

Exchange and Server Naming

I worked for an organisation once that had a naming convention for its servers consisting of:
  • a country code (2 alpha)
  • a location code (3 alpha)
  • a server type code (2 alpha)
  • an instance number (2 numeric)
This was fine as naming conventions go (although these days I personally prefer location independent naming conventions as modern servers can so easily and quickly be relocated).

Unfortunately, this resulted in a server name of AUTHOMS01. You might look at this and think "Okay, no problem" and you would be right, unless you installed Exchange on the server.

We couldn't for the life of us figure out why Exchange would not complete SMTP transactions, even though the answer was staring us in the face. It turned out that whenever the server communicated with a destination server, the transaction stopped as soon as the AUTHOMS01 server presented itself... because SMTP saw the first four letters of the server name as a valid SMTP command: AUTH.

So take care not to name your mail servers with a name that starts with a valid SMTP command!

Cheers,
Sean