Jeff Beard-Shouse's blog
Jeff Beard-Shouse — Thu, 01/12/2012 - 18:19
The DPDesktop 0.72 plugin for Web2Project does not work with Web2Project version 2.4 (maybe 2.3 as well, but not sure).
There are two ways to fix this: manually edit the file, or use the patch I supply (see below).
Manually edit the file
The issue is that newer versions of Web2Project use different classes for database and date handling than previous versions did: the "DBQuery" class is now "w2p_Database_Query" and "CDate" is now "w2p_Utilities_Date".
The file that needs to be modified is dpdesktop-0.72/service/classes/class.dao_web2project.php
Change lines like:
$q = new DBQuery();
to:
$q = new w2p_Database_Query();
And lines like:
$date = new CDate();
to:
$date = new w2p_Utilities_Date();
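If there are many occurrences, the two renames can be scripted. Here is a minimal sed sketch (GNU sed assumed), demonstrated on a sample line so the substitution is easy to verify; to edit the real file in place, run the same expressions with -i.bak against class.dao_web2project.php:

```shell
# The two class renames as sed expressions, shown on a sample line.
# For the real file: sed -i.bak -e '...' -e '...' class.dao_web2project.php
echo '$q = new DBQuery(); $date = new CDate();' \
  | sed -e 's/new DBQuery()/new w2p_Database_Query()/g' \
        -e 's/new CDate()/new w2p_Utilities_Date()/g'
# prints: $q = new w2p_Database_Query(); $date = new w2p_Utilities_Date();
```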
Use the patch file
Here is a patch file.
1. Download and unzip the file.
2. Copy the patch file to your DPDesktop plugin directory, next to the file to patch (i.e. dpdesktop-0.72/service/classes/).
3. cd to that directory.
4. Run the command:
patch < class.dao_web2project.php.patch
The < should be the less-than character; for some reason my blog is escaping the character on output even when I use entity characters.
Jeff Beard-Shouse — Tue, 12/07/2010 - 16:32
So I was upgrading the kernel on my wife's laptop (it's a long story) and I had an issue with vmplayer: it would not compile the vmware kernel modules. For the record, the laptop has KUbuntu 10.04, I had just upgraded the kernel to 2.6.36-1, and I had grabbed the latest vmplayer (3.1.3 at the time). The error message was as follows:
make: Entering directory `/usr/src/linux-headers-2.6.36-1-generic'
CC [M] /tmp/vmware-root/modules/vmmon-only/linux/driver.o
/tmp/vmware-root/modules/vmmon-only/linux/driver.c: In function `init_module':
/tmp/vmware-root/modules/vmmon-only/linux/driver.c:425: error: `struct file_operations' has no member named `ioctl'
make: *** [/tmp/vmware-root/modules/vmmon-only/linux/driver.o] Error 1
make: *** [_module_/tmp/vmware-root/modules/vmmon-only] Error 2
make: Leaving directory `/usr/src/linux-headers-2.6.36-1-generic'
make: *** [vmmon.ko] Error 2
make: Leaving directory `/tmp/vmware-root/modules/vmmon-only'
As it turns out this is because vmplayer apparently does not support 2.6.36 yet. According to a Gentoo bug report I found "With the 2.6.36 release it has been removed the ioctl() method from the file_operations structure. All the drivers should now call unlocked_ioctl() which doesn't run under the Big Kernel Lock (BKL)." So I applied the patch as root:
tar xf /usr/lib/vmware/modules/source/vmmon.tar
patch -p0 < /root/2.6.36-ioctl.patch
tar cf /usr/lib/vmware/modules/source/vmmon.tar vmmon-only/
After which vmplayer compiled the modules just fine. Thank you Gentoo community.
Gentoo bug report and patches can be found here.
I hope this will help someone else.
Jeff Beard-Shouse — Sat, 09/18/2010 - 16:08
A while back I had a person email me wanting some help getting their Samsung Galaxy S from Bell Canada working with their iPhone SIM on the Telus network. The problem was he could make phone calls, but no data networks were showing up.
While I admit that my knowledge of those networks and the particulars of which phone works where is limited, I said I would help as much as I could. I offered some advice; however, he figured it out after some help from the local Telus store employees.
Aside from a fluke timing issue where he tried it during the 2+ hours the Telus data network was down in his area, he also had to set the Access Point Name to "sp.telus.com". Steps to set the APN:
1. Unlock your phone (see link) and insert the new SIM.
2. Go through settings->wireless and network->mobile networks->Access Point Names.
3. Add an APN (Name = Telus, APN = sp.telus.com).
I figured I would write about our experience (mainly his experience), so it could help others who may have the same issue.
For more information about Access Point Names for Canadian carriers see this link at the XDA forums.
Jeff Beard-Shouse — Wed, 08/25/2010 - 21:16
I developed this Android application in connection with my friend daGentooBoy, who was one of the people at the XDA forums who figured out where the codes were stored. Note: he was not the only person who worked on this; see the forum link below for a full list of credits. This Android application is meant for people who want to unlock their Samsung Galaxy S variant phones to work on another carrier. Root is not needed except when using Froyo (Android 2.2).
- Europe I9000
- T-Mobile Vibrant
- AT&T Captivate
- Bell I9000 Vibrant
Step 1, Get the code:
Install the app below. When you run the app, it will search for the unlock and unfreeze codes on your phone.
After it acquires the codes, they will be saved to your SD card in a text file. You can also write them down or email them to yourself with the email button.
Step 2, Enter code:
Power down the phone
Put in a SIM card from another carrier
Power up the phone
When it boots up, enter the unlock code when asked
This information was adapted from xda-developers forums, so for more information and full credits please visit
Download SGS Unlock for Android 2.2 version (experimental, 2.1 users do not use)
If you like the app, please donate. Unlock codes on the internet generally go for around $25; this app is provided to you for free.
Disclaimer: This application was not developed in any way in connection with my employer. My employer does not endorse this application in any way. This app is offered as is; I will not be held liable for any damages caused by the use of this application.
Jeff Beard-Shouse — Sun, 02/07/2010 - 17:43
Lately in my security research I have been playing with a program I saw in a security video from Black Hat. This program is called SSL Strip; it was written by Moxie Marlinspike, and I have to say it is cool and scary. This article will be mainly focused on SSL Strip from a security professional's perspective; however, I will try to make the technical details plain enough that the average user can pick up the general ideas. This may also mean that I cover some information a security professional already knows, but hang in there, because I feel I need to lay the groundwork for the later points.
Disclaimer for the non technical:
I normally don't talk much about computer security to people who are not tech savvy, as it usually ends up with them dismissing me because "it can't be that bad" or they become extremely scared and/or paranoid. Now I don't like being dismissed, but I really don't want people to stop their online life just because of what I say. However I do think people need to be informed so they can make decisions about their online life. So please read on.
Most people don't type the protocol ("http://" or "https://") when they visit a website. The "s" in "https://" stands for Secure: the browser uses SSL (Secure Sockets Layer) to make a secure connection to the server, while "http://" is not a secure connection. Most people will just type "gmail.com", "amazon.com", or "paypal.com", and the browser helpfully adds "http://" (not secure) to the beginning. What this means is that the browser defaults to being insecure. When a site that wants a secure connection gets a request that is not secured, it will redirect the user to the secured site ("https://"). As an exercise for the non-technical user, watch the address bar for "http://" and "https://" when you are logging into and using Facebook, online email, etc. Many sites will serve the login page secured, so your username and password are protected, but drop back to a non-secured connection after that. When this happens your online email, Facebook, etc., can be seen by people on the same network. For example, if you are at a coffee shop or on some other public wireless connection, anyone else around can see what you are doing (emails, instant messages, wall posts, etc). Think of sending your emails on postcards through the mail: anyone who sees the postcard can read it, except here "anyone" is everyone around you in that coffee shop / public space.
It also gets a little worse. While you are logged in, the site accepts a temporary "password" from your computer to access the site; this is also known as a cookie. This temporary "password" is only good while you are logged in, and for a period of time after you leave if you forget to hit the "logout" button on the site. However, when a site drops back to being insecure ("http://"), that temporary "password" is also sent over the non-secured connection. This means that anyone on the same network can see that temporary "password"; again, think coffee shop or other public WiFi connection. While that temporary "password" lasts, anyone can use it to access your account as if they were logged in as you. During that time they can read and send any emails, instant messages, wall posts, etc. To put some fears at ease, all the banks and online payment systems I have seen use SSL ("https://") during the whole process. However, as of the time of this writing, most email providers and social networking sites do not. The one exception is Google Mail, as they recently moved to all SSL by default. Two side notes: with all that said, it also appears that some sites don't even know how to handle the username and password securely (West Star Energy, I am looking at you!). Secondly, if you are a site administrator and handle potentially confidential information, please consider using SSL for the entire session. It's 2010; processing power is cheap, and there is really no excuse.
And The Ugly
SSL Strip works because of the default behavior of the browser / user and the insecurity of the wireless network. A malicious user on a wireless network can trick other users into thinking that her computer is the wireless router, so that they send their internet traffic to her; this type of attack is known as "Man In The Middle" and can be achieved by ARP spoofing. The malicious user can then forward or relay this traffic on to the internet, and the unsuspecting users don't know that it is happening, as everything appears normal. While she is relaying the data she can read and modify any non-secured data; this is where SSL Strip comes in.
If a user on that network types in a website without the "https://", the malicious user can see that request and the redirect the site sends back to make the connection secure. SSL Strip will receive the redirect but not relay it back to the user. Instead, SSL Strip sets up an SSL connection between itself and the site, then relays any data it gets from the site (over the secured connection) back to the user over the non-secured connection. This means that the user does not have a secure connection, but the server thinks the user does, since the malicious user has a secure connection to the site. Any information that would have been secured by SSL, including username/password and bank account / payment information, can be read, since it is no longer secured as it passes through the malicious user's computer.
The user entering an address is not the only attack vector of SSL Strip; it also removes the secure protocol from links that pass by. For example, if a web site that is not running over "https://", say the user's email, has a link that says "https://bankofamerica.com", SSL Strip will replace it with the non-secure version. Thus if the user clicks on the link, potentially in a legitimate email from their bank, the attack will compromise the user's interaction with that site as well. This demonstrates how the attack can be chained to compromise sites that might otherwise have been fine; a chain is only as strong as its weakest link. In these attacks the web sites see the user as having a secured connection, so nothing looks different to the web site. From the user's perspective they do not have a secure connection, so the user could spot this attack. As Moxie Marlinspike points out, there are usually some subtle differences in how the web page is displayed when it is not secured.
For example, in Firefox the user will not see the green background on the site icon (they just see the normal gray background, as displayed for all non-secure pages); the lock icon at the bottom of the browser in Firefox and IE does not appear; etc. For a more comprehensive list see Moxie's slides. In reality these changes are subtle, and most users don't seem to notice.
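At its core, the link-rewriting half of the attack is a trivial substitution. A minimal sketch of the idea (this is an illustration, not Moxie's actual code), applied to one link of a relayed page:

```shell
# Downgrade every secure link in a relayed page to its insecure form;
# the victim's next click then arrives over plain http://.
echo '<a href="https://bankofamerica.com/login">Log in</a>' \
  | sed 's|https://|http://|g'
# prints: <a href="http://bankofamerica.com/login">Log in</a>
```

The real tool of course does this to every page it relays, while also intercepting the http-to-https redirects described above.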
As with most computer security failures, the above scenarios show a chaining of failures. Among the scarier links is the ARP spoofing; however, in this article I would like to focus on the browser and web site side of things. Specifically, I see two weak links. First, web sites that do not use SSL "end to end" to protect the user's data; as seen above, this can lead to cross-site exploitation, especially if the first site is trusted (like email). Second, the browser's default behavior when dealing with an unspecified protocol.
In this day it is almost negligent for a web site that deals with personal information (like email, instant messaging, personal communications) not to use SSL all the time. Most web sites that drop back to a non-secured connection will say they do so because of the computational expense of SSL. To that I say: the year 2000 called and wants its excuse back. Sure, there is computation involved in SSL, but it has been about 10 years and we have much more computational power now. In my humble (or not so humble) opinion, it should be an expense that comes along with running a web site that hosts personal information / communication. Don't get me wrong, I am not saying SSL is perfect and a cure for everything, as there may be security concerns in SSL that we need to work on, but let's at least make an attempt to secure people's information.
Unfortunately, just making the web sites that contain personal information secure will not be enough, as going from any non-secure site to a secure site would then be suspect. We could try to educate users about the dangers of going from a non-secured site to a site that should be secure; if we could get users to reliably recognize secure sites, this attack would be avoided. But educating users cannot be the answer in and of itself, at least not until we get computer security to a place where following safe practices is as easy as reading road signs. So instead of educating users about going from non-secure to secure sites, maybe we could mandate that all sites be secure. That is probably not going to happen, so we need a practical way to fix the situation, which leads me to my second weak-link point.
When I presented SSL Strip at a lunch seminar at my work, the question was asked: "As software developers, how can we prevent and/or help the situation for the average user?" This is a great question; however, the answer is not as evident as it might appear. First, we have the problem of the default behavior for an unspecified protocol. Currently browsers assume non-secured http by default, which is problematic. We could instead try secure first, then drop back to non-secure if there was no response on the secure connection. However, this will not work, as it would be trivial for SSL Strip to make the secure connection unresponsive so the browser would drop back to non-secure. You would probably have the same problem with any fallback scheme, as SSL Strip could mimic any of it. The one exception is a possibility I have heard proposed, which I will discuss later. Another approach would be to better display to the user when they are on a non-secure site. However, this has the problem that the user has to be educated as to when they should be on a secure connection. Also, most UI cues, whether annoying or not, will be ignored by the user if they routinely see them when everything is fine. As software developers, what we really need to know is when the user should be on a secure connection, and match that to what is happening. This would require some sort of Artificial Intelligence to understand what data should be kept on a secure connection and what data does not have to be. Alternatively, we could say that all information entered by the user should go over a secure connection, but that again forces most web sites to run over SSL, which is probably not going to happen anytime soon.
As you can see, really the only ways to improve this situation (limiting our changes to the browser / internet) are to make a browser that knows what should be secure, or to secure everything and deny non-secure content in the browser. Neither of those solutions is going to happen, as they require either A.I. or every site on the internet to change. Alternatively, we could educate users about these security issues. As security professionals we have been trying to educate users with limited success, in part because people just want stuff to work and not have to think about it. The debates about what should or should not be expected of users will continue to rage on, and I don't see an end to that debate in the next 10 years.
The one solution I have heard about that may work is having a flag in the DNS record, indicating whether the site should be secured, that the browser would strictly obey. This would of course have to be a DNSSec record, as a regular DNS record could be easily spoofed by SSL Strip or an equivalent program. This solution will have to wait for a full deployment of DNSSec, which is at least 2-5 years in the future. Until then, all I can say is: keep an eye on your address bar for the "https://", and do your best to educate as many people as you can. Good luck.