Archive for the ‘Uncategorized’ Category

What is Knowledge?

December 6, 2013

Knowledge is a domain that encompasses contradictory traits; “knowing” describes holding a statement, true or false, to be absolutely true. Knowledge can be classified* into one of three classes: knowledge that I know I know (the knowledge that I am aware of and do know very well); knowledge that I know I don’t know (knowledge that I am aware exists, but that I’m unfamiliar with or do not know to be unequivocally true); and finally, knowledge that I don’t know I don’t know. This last one is the largest class of knowledge, simply because I cannot be an expert on every topic, and what I know spans a small percentage of what there is. So, I am going to claim that no one can have absolute knowledge; instead, we can only have relative knowledge that can be viewed as subjective or objective.

I hypothesize that there should always be knowledge that I don’t know I don’t know. If I believe that I know of the existence of all knowledge (knowledge I know I know, and knowledge I know I don’t know), then that means nothing exists beyond my knowledge or awareness. Since this claim doesn’t contradict the claim that there exists at least one thing that I don’t know that I don’t know, both claims can coexist. This means that the latter claim can possibly be true and cannot be proven untrue unless there is a source of absolute knowledge that can be used as a reference against my knowledge level. Thus, there is always the possibility that something exists that I don’t know I don’t know, and that cannot be proven wrong unless all knowledge is finite. However, if knowledge is finite, then there exists a point in learning that I can reach where there is absolutely no chance that I do not know what statements I do not know. And since I’ve already shown above that I cannot verify the existence of things that I do not know I do not know, there is no way to prove the claim that knowledge is finite. Thus, knowledge is infinite: no matter how much I learn, the claim that there exists at least one thing that I do not know I do not know remains valid, precisely because it is intrinsically unverifiable.

Another argument that no one can have absolute knowledge is that knowledge can come in many versions of the truth. Believers in different gods hold different knowledge of what a god is. If I possess absolute knowledge, then I must “know” all the different versions, which means that I have to possess all possible opinions on any topic and “know” each of them to be true individually. But holding all possible opinions contradicts being opinionated or knowledgeable on a topic. This last statement can be proven simply: the opinion of all opinions is itself one opinion, and there exists at least one opinion that disagrees with it (any opinion that “knows” one version implicitly disagrees with all other versions), which forms one more opinion that is not included in this universal opinion, unless the universal opinion also includes opinions that disagree with itself! That is a self-admission of a weakly formed opinion, which doesn’t rise to the strength of knowing something absolutely.

What is knowledge? Knowledge is “knowing” something absolutely. It can be viewed as subjective since it can be polluted with one’s opinion, although to that person, her opinion is truth. “Knowing” one’s god doesn’t mean that that specific god exists. I can “know” anything I want because it doesn’t have to be absolutely true, although to me specifically (in my opinion), it may be. It is easy to prove that knowledge is not always true by looking at all religions and how they cannot all be true, especially given the contradictions among different aspects of those religions. Furthermore, if knowledge were always true, then knowledge would not exist today, because I cannot know for sure whether what scientists discovered today will remain standing (remain true) through future discoveries, nor can I know for sure that my understanding of the nature of things around me will hold. Unless I can prove that, knowledge does not exist until its definition is relaxed so that it can encompass true and false statements alike. Knowledge has to include both true and false understandings of a phenomenon. For example, “knowing” that Jesus Christ was resurrected is true for one person, such as a Christian, and false for a Muslim. Thus, “knowing” that Jesus Christ was resurrected can be simultaneously true and false in the domain of knowledge.

Also, another argument that knowledge doesn’t exist, if it has to mean absolute correctness or trueness (invariance across people), is that my knowing of my god, and yours of your god, means that neither of us possesses any knowledge, given that there are two versions of it and both cannot be true at the same time. So, knowledge can be false and subjective, and can have many versions of varying truthfulness.

How is knowledge different from opinion? Belief? Understanding? An opinion is a formulation, or the result, of a collection of knowledge elements or facts. A person formulates an opinion after acquiring knowledge of a certain subject. An opinion leads to an established or fixed belief that is no longer subject to deviation. Understanding of something denotes a weaker knowledge of the topic. “Knowing” is the most powerful understanding of something.

Knowledge is relative and subjective. In Plato’s cave, one learns a lot about surrounding objects by observing. Sometimes what you observe is a minimal representation of the actual object or its original behavior. Maybe I just see shadows of the real object, and the object, although fairly represented by its shadow, is significantly different in reality. My knowledge of the object may converge, or even diverge, as more characteristics of the object are revealed. Furthermore, since the learning experience cannot be provably bounded, there is no way to determine its ceiling; knowledge can only mutate and never be fixed. The moment knowledge is fixed for all topics is the moment we solve all problems in the world, observable and unobservable, and reach absolute knowledge.

Is there such a thing as absolute knowledge? Can it be attained by anyone? Can it be described? What is absolute knowledge? I know what knowledge is (defined above). And I know that knowledge does not always have to be true, correct, or complete. And I know absolute knowledge to be “knowing” all versions of everything. If all knowledge can be quantified and identified, then it must also be assumed attainable, which means that all knowledge has to be finite. I have already proven that knowledge is not finite and can never be proven to be finite, because such a proof would have to address the following contradictory statements:

1. All knowledge can be reduced to knowledge that I know I know, and knowledge that I know I don’t know after some learning.
2. There exists at least one thing that I do not know I do not know.

Thus knowledge is infinite, since the first statement cannot be proven while the second remains unprovable: we cannot prove that all the things we are not aware of do not actually exist. This implies that knowledge is infinite and cannot be fully attained, and that our knowledge is relative and can, and will, encompass absolutely true and false things.

* The three classifications mentioned above were borrowed from a presenter at the No Fluff Just Stuff IT conference in 2009.


Summary of Instruction Level Parallelism Limitation

December 5, 2012

Limitation of ILP

1. Most code is not parallelizable. There is an upper limit on parallelization according to Amdahl’s law. ILP speedup may need to reach a factor of 5 to 20 to be generally accepted, given the VLIW processor complexity that must be introduced to achieve it [A]. During simulations, even with increased hardware available to the processor, programs did not achieve a corresponding linear increase in performance by exploiting ILP [C].
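The Amdahl's-law ceiling can be made concrete with a short calculation (a sketch only; the function name and example fractions are illustrative, while the 5-20x target figure comes from [A]):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Overall speedup when only `parallel_fraction` of the work
    can be spread across `n_units` parallel functional units."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# Even with an essentially unlimited issue width, code that is only
# 60% parallelizable caps out at roughly 2.5x.
print(amdahl_speedup(0.6, 10**9))   # ~2.5
print(amdahl_speedup(0.9, 8))       # ~4.7 -- still short of a 5-20x target
```

The second call shows why the serial fraction, not the hardware budget, dominates: eight-way issue on 90%-parallel code still falls below the low end of the acceptance range quoted above.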

2. Data dependency. Either stall and waste resources (and keep the hardware simple), or:
2.1. Data speculation [B] – complexity of handling predictions (serialized and non-serialized methods), handling spurious exceptions, memory disambiguation/aliasing, and misprediction undoing penalty.
2.2. Forwarding – does not completely avoid stalling, and increases hardware complexity.
2.3. Instruction reordering (compiler and/or hardware) [C]. This technique is used to remove data dependencies mostly originating from waiting for a memory operation to finish (memory operations account for 30-40% of total code [C]). This approach introduces hardware complexity, such as reorder buffers, to allow out-of-order execution but in-order retirement of instructions [C]. Memory delays could also be shortened through the use of I/D caches. However, those present their own sets of challenges and limitations (limited size, enforcing data coherency across processor caches, the overhead of managing efficient cache replacement policies, etc.). Generally speaking, there are many ways to accomplish instruction reordering:
2.3.1. Compilers can tag instructions with dependency flags (all instructions this current instruction is dependent on) such as in dependence processors (Dataflow). This can also be accomplished by the processor itself without the help of the compiler such as in sequential (superscalar) processors (although they may also use suggestions from the compiler but will not guarantee using those suggestions) [A].
2.3.2. Compilers can tag how many of the last M instructions the current instruction depends on (as in the Horizon processor) [A].
2.3.3. Compilers can group instructions together in traces. This includes moving and duplicating instructions across various basic blocks of execution, so that instructions can execute (early) as part of a basic block that will definitely be executed regardless of intervening control decisions. More code size, but higher performance overall.
2.3.4. Compilers can use register renaming and loop unrolling to remove data dependencies across iterations and speed up execution (sending separate iterations to execute in parallel); this is referred to as software pipelining [A]. It adds a trade-off between unrolling more iterations for higher throughput and increasing code size, when some of the unrolled iterations may end up unnecessary (the loop ends earlier than the unrolled code assumes). Software pipelining goes beyond loop unrolling: it also moves code not dependent on the loop outside of it (some to the top [prologue] and some to the bottom [epilogue] of the truly iterable middle [kernel]), and then the truly iterable code is re-rolled. This type of compiler scheduling is called modulo scheduling [A]. This approach can also cause register spilling (more registers needed than absolutely necessary for program execution), condition prediction (we need to speculate on whether the loop will execute at least one more time in order to unroll; static prediction), true dependencies on data used in iteration i after being written in an iteration < i, memory aliasing issues (will some pointers in one iteration write to the same address as in subsequent iterations?), etc.
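As a rough illustration of what unrolling plus renaming buys, the sketch below sums a list with a 4-way unrolled kernel; the four independent accumulators stand in for renamed registers, and the epilogue handles leftover iterations. The Python function is my own illustrative stand-in for compiler-generated code, not a technique from the referenced papers:

```python
def sum_unrolled(xs):
    """Sum with a 4-way unrolled kernel: four independent accumulators
    play the role of four renamed registers, so the four adds per trip
    have no dependency on one another and could issue in parallel."""
    a = b = c = d = 0
    n = len(xs) - len(xs) % 4          # kernel handles full groups of 4
    for i in range(0, n, 4):
        a += xs[i]
        b += xs[i + 1]
        c += xs[i + 2]
        d += xs[i + 3]
    for x in xs[n:]:                   # epilogue: leftover iterations
        a += x
    return a + b + c + d

print(sum_unrolled(list(range(10))))   # 45
```

Without the renamed accumulators, every add would wait on the previous one (a true dependency chain of length n); with them, the chain per accumulator shrinks by a factor of four, at the cost of larger code.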

3. Control dependency. Either stall and waste resources, or:
3.1. Branch prediction – complexity of handling predictions, handling spurious exceptions, and misprediction reversal (reservation station, instruction caches, and instruction commits are used to allow this).
3.2. Simultaneous branch execution (branch fanout) – more functional units, more registers, waste of resources that otherwise could have been used somewhere else, more management of what gets committed, and what gets to be discarded, etc.
3.3. Compilers and hardware working together to add more delay-slot instructions and reorder instructions.
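To make the prediction side of 3.1 concrete, here is a minimal sketch of a classic 2-bit saturating-counter predictor (my own illustration, not a mechanism from the referenced papers). Note that it covers only prediction itself, not the misprediction-reversal machinery listed above:

```python
class TwoBitPredictor:
    """Minimal 2-bit saturating-counter branch predictor."""
    def __init__(self):
        self.state = 0  # 0,1 predict not-taken; 2,3 predict taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # Saturate at 0 and 3 so a single anomaly cannot flip the prediction.
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A loop-style branch pattern: taken 8 times, exits once, taken 8 more times.
p = TwoBitPredictor()
hits = 0
for taken in [True] * 8 + [False] + [True] * 8:
    hits += (p.predict() == taken)
    p.update(taken)
print(hits)   # 14 of the 17 branches predicted correctly
```

The two-bit hysteresis is what keeps the single loop exit from costing two mispredictions on re-entry, which a 1-bit scheme would incur.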

4. Optimizing and parallelizing programs depend on knowing which basic blocks (or portions of the code) would run more frequently than others – execution frequency of basic blocks [A]. This can be achieved via analysis of the control flow graph of the code through the use of profilers. This is not a straightforward process, and it is highly dependent on the workload at runtime.

5. Basic block ILP limitation [A] can mostly be attributed (aside from the data dependencies already mentioned above) to limitations of functional hardware units when a block contains similar operations (albeit independent ones). For example, if the processor has 5 ALU units available to execute 5 adds in parallel (same cycle), but a basic block has 6 independent adds, then we need two cycles instead of one. That is why a VLIW instruction will include differing operations that can execute in the same cycle rather than many copies of the same operation (depending on what is available in the hardware). Furthermore, ILP could be limited to 3-4 parallelizable instructions out of around 10 in a basic block, as an upper limit [F]. This is just a limitation of the parallelism in the code itself.
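The functional-unit arithmetic in this point can be sketched as a tiny cycle-count estimate (illustrative only; it assumes all operations in the block are independent, as in the example above):

```python
import math

def cycles_needed(op_counts, unit_counts):
    """Minimum cycles to issue a basic block of fully independent
    operations, limited only by the number of functional units
    available per operation type."""
    return max(math.ceil(op_counts[op] / unit_counts[op]) for op in op_counts)

# 6 independent adds but only 5 ALUs: two cycles instead of one.
print(cycles_needed({"add": 6}, {"add": 5}))                       # 2
# Mixing operation types uses the machine better, as VLIW words do.
print(cycles_needed({"add": 3, "mul": 2}, {"add": 5, "mul": 2}))   # 1
```

Real schedulers also face dependencies, latencies, and issue-width limits, so this bound is optimistic; it isolates only the structural-hazard component discussed here.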

6. Naming dependencies (other than true dependencies). These can be resolved with register [C] and memory renaming (except for memory with potential aliasing problems), which is usually done by the compiler [B]. It is still limited by register availability and management (besides potentially introducing register spilling issues, we may also run into bandwidth issues in register files due to more reads and writes on the expanded set of registers).
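The three dependency types involved here can be illustrated with a small checker over (destination, sources) tuples; the RAW edges it finds are the true dependencies, while the WAR and WAW edges are exactly the ones renaming can remove. The encoding and names are my own, for illustration:

```python
def classify_deps(instrs):
    """Classify pairwise dependencies between instructions given as
    (dest, [sources]) tuples. Only RAW (read-after-write) dependencies
    survive renaming; WAR and WAW disappear with fresh register names."""
    deps = []
    for i, (d1, s1) in enumerate(instrs):
        for j in range(i + 1, len(instrs)):
            d2, s2 = instrs[j]
            if d1 in s2:
                deps.append((i, j, "RAW"))   # j reads what i wrote
            if d2 in s1:
                deps.append((i, j, "WAR"))   # j overwrites what i read
            if d2 == d1:
                deps.append((i, j, "WAW"))   # j overwrites i's result
    return deps

# r1 = r2+r3 ; r4 = r1+r5 ; r1 = r6+r7
prog = [("r1", ["r2", "r3"]), ("r4", ["r1", "r5"]), ("r1", ["r6", "r7"])]
print(classify_deps(prog))
```

Renaming the third instruction's destination to a fresh register (say r8) removes the WAW and WAR edges, leaving only the RAW edge between the first two instructions, which no amount of renaming can break.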

7. A big reason why code is hard to parallelize is address calculations [D]. Eliminating or reducing long dependency chains of address calculations via compiler-optimized code has been seen to increase ILP. The paper [D] refers to techniques in other resources for optimizing address computation, so I will have to do more reading.

8. Most of the data dependency that limits ILP comes from true dependencies (read-after-write dependencies), because the other two types (anti-dependencies [write after read] and output dependencies [write after write]) can be managed with renaming [E]. Those true dependencies come primarily from compiler-induced optimizations supporting high-level language abstractions [E]. The compiler introduces heavy usage of the stack to reflect activation records corresponding to function calls and private variables, introducing true dependencies that significantly reduce parallelization [E]. The true limitation is not the time it takes to allocate or deallocate the stack pointer register for an activation record, but the long chain of dependencies introduced [E]. [E] shows that even if all dependencies were completely eliminated, leaving only those that update the stack pointer, the total performance gained is nearly unchanged (those updates control the absolute limit on achievable ILP). Removing stack update dependencies, however, has been shown to provide significant performance gains even compared to perfect prediction and renaming: use heap-based activation record allocation instead of stack allocation (accepting the higher allocation overhead to enable multi-threading and truly parallel execution of program traces). Other suggestions include the use of multiple stacks, or switching between stack-based and heap-based allocation at compile time based on the depth of the calling chain (the deeper the call stack, the more benefit gained from heap-based activation record allocation) [E]. Some papers show that by increasing the window size, more parallelism can be exploited, while [E] shows that even if that is true, “distant” dependencies (beyond the window size) cannot be exploited with out-of-order instruction issue by superscalars, and other methods are needed under a reasonable window size limitation.

Things to look for or read about

1. Window size adjustments. How about compiler-controlled variable sizes?

2. Perfecting branch predictors; this is very important since it has a major impact on exploiting ILP. Most papers I read ran simulations under unreasonable assumptions such as perfect prediction, unlimited resources, unlimited bandwidth, unreasonably low penalty cost for mispredictions, ignoring spurious exceptions, etc.

3. Handling memory aliases at compile time.

4. Choosing stack based or heap based activation record allocation at compile time. Maybe even considering multiple stacks – addresses true dependencies introduced by the compiler via deep chain of function call dependencies. A major performance increase can be gained here.

5. Clock rate variation per operation to increase throughput for faster operations. This could raise an otherwise low ceiling on throughput, even for embarrassingly parallel operations on the same CPU.

6. Generation of per-thread-based traces by the compiler, taking into account shared versus dedicated on chip memory, possible proximity to shared caches, etc.

7. Can traces be concurrent rather than parallel? Allowing concurrent execution rather than parallel execution (allowing values to be forwarded or written to shared caches rather than waiting for a complete trace to finish before another one starts, even on separate cores).

8. Maybe enforce conventions via the compiler to allow predictable address fanout (range of memory addresses) for given functions or programs. For example, for dynamically allocated objects, the compiler may hint to the hardware how far apart they need to be on the heap, which would allow the hardware to take advantage of locality when loading a cache line from memory. Those can only be hints due to memory allocation and page replacement strategies, but cooperation from the hardware and hints from the software could increase this utilization.

9. Exploit the nature of sequence analysis algorithms to optimize performance.

10. A hybrid processor approach to realize ILP and DLP (combining VLIW/superscalar and vector processors).


A. Instruction-Level Parallel Processing. Joseph A. Fisher and B. Ramakrishna Rau.
B. Limits of Instruction Level Parallelism with Data Value Speculation. Jose Gonzalez and Antonio Gonzalez.
C. Exploiting Instruction- and Data-Level Parallelism. Roger Espasa and Mateo Valero.
D. Limits and Graph Structure of Available Instruction-Level Parallelism. Darko Stefanovic and Margaret Martonosi.
E. The Limits of Instruction Level Parallelism in SPEC95 Applications. Matthew A. Postiff, David A. Greene, Gary S. Tyson, and Trevor N. Mudge.
F. Limits of Instruction-Level Parallelism. David W. Wall.

Are you efficient?

November 16, 2012

We excessively talk about efficiency (such an inefficient use of the word, I may add) when we refer to things like an efficient car because it consumes less gas. We use the word every day without fully appreciating its meaning (compared to the alternative) or how it can be attained. However, once understood fully, addressing it in our everyday life can make a big difference. Efficiency can make our life easier, more productive, and allow us to, well, spend time writing blogs. Efficiency means an optimal use of your time: not less, not more than what is absolutely needed to do something. It doesn’t mean that you have to be working all the time. It just means that you are more productive so you can have more free time for yourself.

I keep hearing people blame technology for diverting our attention from things that matter. I am one who always uses his phone for work, maintaining a social presence, and browsing the Internet for “stuff”. However, as the things I do on the phone increased over the years, I ended up spending a lot of “wasted” time on it. For example, when I hear a “ding” announcing an email, I immediately go unlock my phone and check it. A lot of times, though, the email turns out to be spam or not so important. Unfortunately, that is not the end of it. I found myself, once I had unlocked the phone, browsing other applications such as Twitter, Facebook, WordPress, etc. It almost seemed like I unconsciously felt that since I had unlocked my phone, I was entitled to use it for a few minutes before it went into “locked” mode again. To generalize, I found myself going to my phone all too often, and wasting the few minutes that followed most of the time. It all started because of a stupid email from Travelocity about how it only costs $10 to fly round trip to Hawaii. One day I realized just how much time I spent on my phone doing nothing but wasting minutes checking things I had already checked and doing things I don’t really need to do. So, I made a decision that I would make my time on the phone more “efficient”. Being efficient can only happen by unlocking the phone ONLY when I really need to. So, basically, I wanted to minimize how often I unlock my phone. But how could I do that? I decided to do a few things:

1. I started using custom ringtones for everything that I usually respond to on my phone. I customized ringtones specifically for family versus friends versus co-workers, etc. On iOS 6 you can customize ringtones per email account. Not only that, I could group certain email addresses into a VIP group (a concept on the iPhone) and give that group a special ringtone. I distinguished notifications from various applications by assigning them different tones from the default ones, which are usually re-used across many applications. This customization allowed me to leave the phone locked and only go unlock it when I heard a tone that I knew was important enough given the time of day. I noticed my browsing time on my phone was cut severalfold. I don’t spend less time on my phone; I just spend my time more efficiently.

2. I removed all notifications from applications that I don’t care much about.

3. I created filters on my incoming emails across all email accounts to archive, and mark as read, all emails that I am not too interested in reading right away. Of course, when you have spam emails coming in, see if you can unsubscribe from the sender (only if it is a legit sender!) so you can kill the excessive emailing at the root. Those filters let me hear fewer dings (when an email arrives) on my phone, which leads to less time on the phone.

Furthermore, I decided to take advantage of technology to make my overall time more efficient, even away from browsing my iPhone. So, basically I decided to use technology to save me from technology in ways such as:

1. I bookmarked all my useful and most frequently visited websites. This helps a lot. Once I open my browser, I have shortcuts to my most frequently visited sites in my favorites list, and I can just click on the site that I want to load. Now, because I have a username/password to unlock my laptop, and another one to load my virtual machine, I can afford to save all my passwords in the browser, so I don’t have to log in with them manually. That speeds up the process of loading a page. If you have a lot of those pages, organize them into folders so they are easily accessible.

2. Un-clutter your paper space by changing all your statements to online statements. This way you can receive them via email or access them on their websites. What I also do is add those websites and my login information (encoded) to the calendar entry for when the statement is issued (when the bill is automatically paid). This way I have all the information about a statement/bill right there in the calendar entry.

3. I write things like my driver’s license number, plate number, frequent flyer numbers, etc. in an app on my iPhone so I can retrieve them when needed.

4. I use an app on my iPhone to sign incoming documents and re-email the signed document back to the sender without printing or scanning anything. I proudly say that I haven’t needed to print anything in months 🙂

5. You have a bunch of tasks to do? You keep thinking about when you will be able to get to them? Well, thinking too much about when you will get to a task consumes your time; you end up spending less time doing fun things while still not having started on your task. So, create a task list and keep adding items to it. Items that are deadline-driven can be added directly to your calendar. If you know you will need to start working on a task X number of days before it is due, add a reminder to that calendar entry that will pop up X days beforehand. This way, you can forget about the task completely, including when you will need to start working on it. Your calendar will let you know! Once you have that list, completely forget about everything you need to do. You will be notified when you need to start working on one of those tasks.

6. I tend to drive a lot. With the crazy traffic in Chicago, it is a great opportunity to plan my tasks so I take care of phone calls over my car’s Bluetooth as I drive. It is wasted time anyway. You may ask me: Don’t you want to listen to some music?? To which I answer: You must not have read the beginning of this paragraph, because with Chicago’s traffic I have enough time to cover all my calls for the day and listen to music.

7. The single most useful special case I would like to share is the use of online tax preparation software like TurboTax. I have been using it since 2008 and I am in love with it. Not only do I get to learn a lot about how taxes work, but I have found it very handy when I needed my tax papers from two or three years earlier. All I have to do is log on to TurboTax, retrieve my taxes for any year, and download them.

8. Thinking efficiently pushed me to optimize other things in my life. For example, I like to read books, and there are so many books I would like to read, specifically in theoretical physics, computers, astronomy, cognition and brain function, etc. I, however, found myself taking way too long to read one book. I found the reason why, and shut up, it is not because I am a slow reader! It is because sometimes I just don’t feel like reading about how the neurological system in our brain works; instead, I would like to read about quantum physics and how particles synchronize simultaneously even when they are galaxies apart from each other! So, what I ended up doing was not reading what I wanted to read at the time, while the book I had started was put on hold. I realized that the most efficient use of my reading time is to start reading a book whenever I would like to read it, even if I end up having many books partially read. This way, depending on my mood, I can switch between books, and overall I end up being more efficient in reading!

I am crazy, you say? Well, applying the things above, and always looking to make my life more efficient, allows me to have plenty of time to myself. I play basketball and soccer, and get to do other things outside of my full schedule. You may say that I am only able to do all those things because my schedule is less busy than yours, to which I reply: no, my schedule is full of things to do and worry about. However, thanks to my task list, at the moment I have no worries at all, and the only thing I am thinking about right now is how the hell I am going to end this post so I can go have my tea!

Use technology to save you from technology and make your life simpler!

A Suggestion for the TSA

November 22, 2010

As you all have read, the TSA has approved new measures to increase security at the airport by forcing travelers to go through highly intrusive pat-down procedures should they refuse to go through the microwave machine. If you choose to refuse the pat-down procedure as well, you will be forced to pay a fine of $11,000 and potentially go to jail. The latter measure is there to ensure that a terrorist won’t try to test the waters by bringing in a bomb strapped around his waist under his underwear, and then, when he is selected “randomly” because he is a little too tanned to be innocent, opting out of all checks and deciding to leave the airport. That is a point, well taken.

Surprisingly, people did not respond too well to allowing a random person to fondle them through their clothes in the name of security. And to add to the shock, they certainly did not take well to the idea of having an adult fondle their kids. As we all know, nothing is more dangerous than a loaded child’s underwear or diaper. Under a lot of pressure, the TSA finally decided that kids under 12 years old will not be required to go through the pat-down process. This irresponsible action just opened the door to a whole new level of security cracks at the airport, through which a terrorist baby (an argument made by Rep. Louie Gohmert (R-Texas) regarding how Muslims come pregnant to the US, deliver a US citizen, then take the baby overseas for terrorist training until the baby is ready to carry out a terrorist attack) can smuggle or be used to smuggle bombs through security checks.

I personally have no problem with added security. After all, it is there to protect us from harm, or in my case, being a person of Middle Eastern descent, to protect me from myself. However, I see a lot of problems with the security approach pursued by the Transportation Security Administration, namely the reactionary approach it follows after every terrorist attempt. One terrorist hides a bomb in his shoes, and we all have to take our shoes off for security screening. One terrorist hides a bomb in his underwear, and we all have to go through a testicle examination at the airport. As a software engineer, I learned that patching issues after they occur, rather than thinking of a bigger solution to all potential problems in the future, is a big no-no. The reason this approach fails is that there will always be a future problem requiring yet another patch fix. However, if I take a step back and change my strategy overall, I may be able to change my code at the base and eliminate many potential problems. For example, British spies found out that Al-Qaeda is planning to train radical doctors to implant bombs in women’s breasts. If they do that, the bomb can go undetected by all the security measures we have today. We are not worried about it today because no woman has tried it yet. I heard women are very picky overseas and refused to implant bombs that would make their boobs less than size D, or less round, or have less of a natural touch. That is why the process is dragging on a little longer than what Osama had in mind. But it will come at some point, and a woman will get through the security checks, and if we are lucky, she will be caught and stopped by air marshals when smoke starts coming out of her breasts as she tries to trigger the bomb. “Her boobs were smoking, and I could think of nothing else but jumping on her and defusing the bomb”, said the air marshal.
Then we will hear about the boob bomber (don’t look for the domain name because I already purchased it). Now what? The only way to start checking for boobs that are “the bomb” is by training security guards to grab female travelers’ boobs to find potential bombs. The “random” search trigger may be “a woman with large boobs”. After all, women with smaller breasts may not have bombs, or maybe the bomb is too small to cause damage. “I will grope any big-breasted woman potentially carrying a bomb for the sake of my country and its national security”, will say any patriot working at an airport security check near your city. This whole approach of reacting to something that already happened won’t get you anywhere. It is reacting to something that failed or succeeded, but it won’t help against future attempts of a different nature.

The TSA must change its strategy and find another way to increase our security, instead of reacting to everything terrorists do. If they choose to continue down this path to see how far they can stretch this, or us, I have a solution for them to quiet all the whiners. A long-term solution that fixes the whole problem at the root, rather than patching a solution every time an issue arises. If you want to pat down all travelers, and you want to invade our personal privacy and potentially add more and more intrusive pat-down procedures, then your best bet is to introduce a little flavor into the process. Bring really good-looking security agents (men and women), and allow travelers to select (from a lineup) the person they would like to grope them, from the opposite sex (or the same, based on the traveler’s choice). We should still have the option to ask for a private room like we do today, and additionally, we should have the option to ask for seconds, or thirds, if we think we may be a danger to this great country. I think all Middle Easterners, and all who have funny accents, should have the choice to select more than one person and go through several pat-downs at the same time, or sequentially, in a private room. When the TSA introduces this measure, many people will change their minds about the procedure; after all, we are going through tough times and we cannot all afford to pay singles at a club somewhere. I think this will not only be received favorably by many people, but would actually boost the transportation industry. After all, a trip to the strip club for a few lap dances and alcoholic drinks could cost you more than a trip to another city for a few days’ vacation, plus the free pat-downs. This will also save the TSA a lot of money because they won’t have to pay for expensive machines anymore. It will also allow them to introduce more and more intrusive measures, and people will only receive them even more favorably than the previous, less intrusive ones.
And hey, maybe this type of procedure would push a terrorist to have a change of heart after going through the experience; after all, a bird in hand is better than 77 somewhere in heaven.