Posts by bjaberle:
- Setting Up an iOS Pentest Lab
- Decrypting iOS Applications
- Reverse Engineering
- MITM on iOS
- Side Channel Leakage
- Introduction to Testing
- The Testing Mindset
- Test Design and Execution
- Testing Tools and Techniques
- Exploratory Testing
- Test Automation
- Test Reporting
- Defect Lifecycle Management
- Testing in Agile
- Security and Penetration Testing
- Developing as a Tester
Not sure what this means but seems as though Google Glass may have been thrown a lifeline. We shall see.
I found this paper while doing some research, and was fascinated (and scared) by what it revealed: you can measure acoustics using the gyroscope on your smartphone. Which at first may sound cool, but think about what that means in terms of privacy. Since, as of this writing, manufacturers and development guidelines do not require users to grant permission to access the gyroscope, apps could potentially “listen” in on conversations to be used for other purposes.
We show that the MEMS gyroscopes found on modern smart phones are sufficiently sensitive to measure acoustic signals in the vicinity of the phone. The resulting signals contain only very low-frequency information (<200Hz). Nevertheless we show, using signal processing and machine learning, that this information is sufficient to identify speaker information and even parse speech. Since iOS and Android require no special permissions to access the gyro, our results show that apps and active web content that cannot access the microphone can nevertheless eavesdrop on speech in the vicinity of the phone.
Read it for yourself and be amazed and frightened!
A few years ago we had the opportunity to have the people from Lullabot come onsite to Float to spend a week consulting and working out some strategy initiatives. It was there that I met Andrew Berry, and through him I was introduced to Selenium. At the time I was a manual tester only, and the test code that he was showing me got me excited and scared at the same time. Now that I have a couple of years of actual UI test automation under my belt, I can start to investigate other areas of testing that are of interest. One of those things is IoT. But as a tester I am well aware of the security pitfalls that besiege the current IoT landscape. Andrew just published a very comprehensive article (with examples!!) on how to inspect an IoT baby camera to detect whether there are any security breaches.
The people over at Security Innovation published a white paper on methods for performing penetration testing and runtime manipulation on iOS devices. Testers should consider investing time to learn how to perform this sort of testing, as security and privacy issues are at the forefront of clients’ and stakeholders’ minds these days.
Some of the topics covered:
Looks like I have some reading for this weekend.
Max Woolf has a GitHub project that provides examples of strings that could make your app not play nice. These can be used for manual and automated testing. The main .txt file consists of newline-delimited strings and comments, which are preceded with #. The comments divide the strings into sections for easy manual reading and copy/pasting into input forms. For those who want to access the strings programmatically, a blns.json file is provided containing an array with all the comments stripped out.
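For real use you would just load blns.json, but the comment-delimited layout of the .txt file is simple enough to parse yourself. Here is a minimal Python sketch of such a parser; the exact section/comment layout it assumes is inferred from the description above, not from the actual file:

```python
def parse_naughty_strings(text):
    """Split a blns-style text file into {section_title: [strings]}.

    Assumption: comment lines start with '#', and the first non-empty
    comment in a header block names the section. Every non-comment,
    non-blank line is treated as a test string.
    """
    sections = {}
    current = "untitled"
    for line in text.splitlines():
        if line.startswith("#"):
            title = line.lstrip("#").strip()
            if title:                      # bare '#' lines just pad headers
                current = title
                sections.setdefault(current, [])
        elif line:                         # non-empty, non-comment: a string
            sections.setdefault(current, []).append(line)
    return sections
```

A caveat of this sketch: if a section header block contains extra descriptive comment lines, the last one wins as the title, so it is only a starting point for the real file.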
I just discovered this page on GitHub for testers. It has an amazing amount of resources for anybody wanting to learn the ins and outs of software testing. It was created by Paul Maxwell-Walters and is intended to help guide new testers through some sort of useful curriculum and collate existing web-based resources into lists of links.
I could see this resource being very useful for anybody wanting to get into software testing, or being used as an onboarding resource for new hires. I will be spending a fair amount of time here myself.
We are overhauling bjaberle.com so that testing and QA awesomeness can spew forth. We have dusted off the cobwebs; now we need to apply a new coat of paint. Fortunately for you, since I have no dev environment, you get to see the changes live.
I have been taking a journey. And where I have ended up is the shore of an unfamiliar ocean. I have been slowly wading in its waters. My pant legs are damp with the stain of programming code. What began as an endeavor to learn the ins and outs of real-time video mixing has thrown me into the world of computer programming languages and logic… and I love it. It seems lately that I cannot consume enough information. But the more I learn, the more I find out how much I have yet to learn (young grasshoppa). I think the evolution thus far is worth documenting, and that is what will be contained in the following article.
I have always wanted to be a rock star. From a young age I felt that I could pull it off. I didn’t quite know how, but I knew that if I just kept practicing my instrument and moving towards that goal, I would get there. But as a bass player, I knew that my role was expendable and I needed to make myself an invaluable part of any band by bringing more to the table. That’s partly why I got into recording and running sound; I mean, are you really gonna fire the guy who owns the sound system? Around 1996 or so I also realized that the laptop would become the “electric guitar” for the next generation. Not only could I envision the evolution of produced and recorded music, but new types of rock stars would emerge. One of the things I envisioned was the need for bands or artists to add an additional member: the “visualist,” so to speak. This person, given the technology, would be responsible for the media being presented in real time during a performance. I saw this person manipulating images, text, video, and lighting cues via MIDI in real time with the performance. Well, I think the technology is finally here to do such things. But to do it still requires a significant investment in learning technology. There is software out there like VDMX and VJamm that lets you do some pretty cool manipulation, but it isn’t until you couple it with a graphics development environment like Quartz Composer (QC) that you get some real groundbreaking results.
I bought Sams Teach Yourself C++. I decided on C++ for a couple of reasons: one, it seemed to be the programming language used in all of the FMOD examples, and two, we were working on some interesting projects at the Iona Group where the dev guys were doing pretty cool stuff with openFrameworks and Cinder, both of which use C++. A few days ago I found a beginner’s book for iPad and iPhone development at the library. I started going through that book and came to find that Objective-C is not the fire-breathing dragon I once thought it was. So I am gravitating back to that while still pushing through my C++ book. On a recommendation from a colleague from Bradley, I got The Audio Programming Book from MIT Press. I quickly realized that this book was a little beyond my reach, but the content is slowly making sense to me. All of the code is in C, so I have backed off that book for now. I plan on writing reviews of each of the books I am going through, so stay tuned.
I have also been doing some remedial code scrubbing on some web projects for the Iona Group, which is not rocket science, but I am glad that the dev guys feel comfortable enough with me to let me tackle some of these jobs. I hope to share some of these as well soon.
So it has been not quite a year now that I have been dabbling in the world of programming, and only now do I feel that I am close to starting to make stuff happen on these wonderful (mobile) devices. I am excited to get some things made and shared with the world. I think I have some good ideas for apps and, like a four-year-old child excited to show his parents his latest finger painting, I am filled with giddy excitement to share. We shall see.
Audio is the unsung hero of most productions. And that truth remains when it comes to developing audio for mobile devices. Gamasutra has a great article discussing the challenges sound designers face when it comes to mobile. This article deals specifically with iOS devices. It is well worth reading. Here are the highlights:
Asset Size – With Apple’s over-the-air size limit of 20 MB, even if you are lucky enough to get half of that, you are left with only 10 MB. This should not be an excuse to produce shabby audio, but a reason to rise to the challenge and impress. You can still create great sound design with some creative looping and compression.
One way to create some variety in your soundtrack would be to create an overall bed of ambient music. It could carry the basic structure: drums, bass, rhythm, and the underlying chord progression. Melodic instances (guitar licks, piano, sax — that’s right, saxophone) could all be broken into smaller, BPM-matched chunks and called back programmatically or randomly, thus giving some of the randomness and variance that our consumers so desperately crave.
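To make that idea concrete, here is a minimal Python sketch of the random, BPM-matched callback described above. It only produces a trigger schedule; the chunk names, the playback engine that would consume the schedule, and the one-bar chunk length are all assumptions for illustration:

```python
import random

def bar_duration_s(bpm, beats_per_bar=4):
    """Length of one bar in seconds at the given tempo."""
    return beats_per_bar * 60.0 / bpm

def schedule_chunks(chunks, bpm, total_bars, rest_chance=0.5, rng=None):
    """Pick a random melodic chunk (or silence) at each bar boundary.

    Returns a list of (start_time_s, chunk_name) tuples that a playback
    engine could use to trigger one-bar, BPM-matched clips over the
    ambient bed. `rest_chance` leaves some bars empty for variety.
    """
    rng = rng or random.Random()
    bar = bar_duration_s(bpm)
    schedule = []
    for i in range(total_bars):
        if rng.random() > rest_chance:
            schedule.append((i * bar, rng.choice(chunks)))
    return schedule
```

Because the start times always fall on bar boundaries, any chunk cut to a whole number of bars at the session tempo will land in time with the bed, which is the property that makes the random recombination sound intentional.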
Environment – How and where consumers listen to your apps varies. No matter where, the audio still has to travel from the device to the user’s ear. Elements like an eerie soundtrack can easily get lost in a bus station or restaurant. Moving towards sounds that are transient and short, but not annoying, can help the user still feel engaged in noisy spaces. The challenge then is to make sure those sounds also translate well when the user is wearing headphones.
Creating audio for mobile devices forces us as sound designers to do things that will make our music producing counterparts cringe with disapproval. You know those 15 cent ear-buds that came with the set of Bic razors your wife just purchased? Better bust those out and listen to your game/app with those, because that is what most of the consumers will be using. If you are lucky they might still have the $5 ear-buds that came with their iPhone. But don’t count on it.
Compressed Audio Files – iOS only allows playback of one compressed audio file at a time. So, if you want to have a soundtrack playing underneath all of your great sound design, it has to be uncompressed. If your developers want a new music track and ambience per level, then you are strapped with a whole new set of challenges. A one-minute PCM file at 24 kHz is 5.8 MB, so a one-minute music loop plus a one-minute ambience loop for each of five levels amounts to 58 MB.
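Those figures check out if you assume 16-bit stereo PCM. A quick Python sanity check (the function name and defaults are just for illustration):

```python
def pcm_size_mb(sample_rate_hz, seconds, channels=2, bytes_per_sample=2):
    """Uncompressed PCM size: rate x bytes/sample x channels x duration."""
    return sample_rate_hz * bytes_per_sample * channels * seconds / 1_000_000

one_loop = pcm_size_mb(24_000, 60)   # 5.76 MB -- the article's ~5.8 MB
# one music loop + one ambience loop per level, across five levels:
all_levels = 2 * 5 * one_loop        # 57.6 MB -- the article's ~58 MB
```

The same function also shows why compression matters so much here: dropping to mono or a lower sample rate only buys a factor of two or so, while a compressed codec can cut the size by an order of magnitude.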
Now it is true that you can have more than one compressed file playing back at a time if you use the software decoder, but this could be a hard sell to your developers, especially if they are trying to maximize processing power. They need to keep their frame rates rock solid. The idea of giving you some of their coveted processing power so your “click” sound can play during runtime is sure to send wafts of laughter through the assemblage of empty Mountain Dew cans and Cool Ranch Dorito bags that festoon the dev cube… oh wait, that’s the recording studio.
So all in all, iOS (mobile devices in general) provides some new hoops for sound designers to jump through, but with a little creativity and some planning, they can be the things that make us all better.