Hey Everyone!
I just got back from two years abroad - 1.5 years living and teaching games programming in New Zealand, followed by a 4.5-month trip through the South Pacific, Southeast Asia, China, North Korea and Japan. What a wonderful experience! I feel like a completely changed person now, and am even more unsure of where home is anymore! =P
I'm settling back in San Francisco - there's a lot to set up, but hopefully I'll get back to "normal" life easily.
As for photos and travel stories, there are many - too many to post all at once, but now that I'm back in the land of unlimited Internet and ambitious self-expression, I'll try to get back to it.
I've now visited something like 40 countries, so hopefully I'll start putting up photos that give a good representation of the various cultures and places. However, my laptop just died, so I need to figure out a way to get my photos off its hard drive - but after a short delay, hopefully I'll get back to putting up more photos and stories!
Looking forward to getting back into the online communities!
- Mark
Wednesday, September 7, 2011
Wednesday, November 24, 2010
Troubled by Decisions? Relax - it Doesn't Matter!
I've come to an interesting conclusion: the more difficult a decision is, the less important it is! Although this may seem counter-intuitive, I haven't found a fully convincing counter-example yet.
The reasoning is as follows: if a decision is easy to make, that means that one choice gives an obviously better result than the other choices. If a decision is difficult to make, it means that either the choices are relatively balanced in terms of positives and negatives, or that there is enough uncertainty that the probabilities of good versus bad results are approximately balanced. Note that in the uncertain case, further exploration of the problem may result in a decision that is easier to make. When I say "easier," I don't mean less exploration; I mean the difficulty of making the decision once all possible options have been realistically considered. Also note that difficult decisions are actually more likely to have a lasting and large impact on the rest of your life - it's just that all of the outcomes are about equally rewarding (or uncertain). I suspect that, to a first-order approximation, the relation for a (binary) decision is something like:
D = I/(1 + |B1 - B2|)
(ignoring the constants, whose values I suppose will depend on the individual)
Where:
D: Difficulty of the decision.
I: Impact that the decision will have. (This can ultimately be defined in terms of B1 and B2)
B1, B2: Expected final benefit from decision 1 and decision 2 respectively.
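For fun, here's a tiny Python sketch of the formula (taking the impact as the larger of the two benefit magnitudes is just my own stand-in for I, purely for illustration):

def decision_difficulty(b1, b2):
    # b1, b2: expected final benefit of option 1 and option 2
    # Impact is taken here as the larger benefit magnitude -
    # just one possible stand-in for "I"
    impact = max(abs(b1), abs(b2))
    return impact / (1 + abs(b1 - b2))

# Two balanced, high-impact options feel hard to choose between...
print(decision_difficulty(100, 98))   # ~33.3
# ...while a lopsided pair of the same magnitude feels easy.
print(decision_difficulty(100, 20))   # ~1.2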
So if you are ever in the situation of facing a difficult life-changing decision, and have explored all possibilities down to the uncertainties of the results, and still find the decision difficult, don't worry about it, go with your gut instinct, and make the best of it!
Sunday, November 7, 2010
Happiness and Choice
I've always felt that people would be happiest with just the right level of choice. With too little choice, people feel constrained and controlled. With too much, people feel overwhelmed and uncertain. With the right level of choice, people feel they are in control, while at the same time knowing that they made a good choice rather than second-guessing their decision against the myriad of possibilities.
What would a graph of happiness (y-axis) vs. choice (x-axis) look like for a typical human being? My first guess would be that there is a single maximum, with the curve starting low near the origin and sloping down towards a horizontal asymptote as choice goes to infinity. Perhaps there are multiple minima and maxima? Perhaps the graph is a hysteresis loop? Perhaps it is something more complicated, with happiness depending on the rate of change of choice as well as the level of choice? Does anyone have any interesting studies they'd recommend for a good read?
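Just to make the single-maximum guess concrete, here is a toy curve in Python (entirely made up, not fitted to any study) with roughly that shape:

def happiness(choice, sweet_spot=5.0):
    # Toy curve: near zero for very little choice, a single peak at
    # `sweet_spot`, then a slow decline towards a horizontal asymptote.
    c = choice / sweet_spot
    return 2 * c / (1 + c ** 2)

for c in (0, 1, 2, 5, 10, 20, 50):
    print(f"choice={c:3d}  happiness={happiness(c):.2f}")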
Friday, November 5, 2010
Internet Brain Virus
For the longest time, I've been interested in whether it is possible to create an evolving computer virus that can change its capabilities and evade detection. I don't really know how anti-virus software works, so I may be completely wrong, but by installing multiple versions of itself with some random variations, perhaps a virus could slowly evolve over time and "learn" to survive.
A more recent idea that I find more interesting is a "brain" virus. What if a computer virus were created that infects computers, installs a back-door access mechanism, and sets up a small neural node cluster on the machine (the exact layout of the neural net and edge weights can be pre-determined, with a few random perturbations thrown in on each install)? The virus also installs some simple scanning, communications, and goal-evaluation software, with some randomisation as to the exact settings of these modules. A virus can install multiple versions of itself on a host machine.
Once the virus is established on the system, it scans the host's contents and spends some of its time scanning the Internet (perhaps a specific set of addresses), passing the collected information through the neural net and scoring algorithms. When the scores pass certain thresholds, this information gets sent, at random, to a set of neural nodes (the nearest set, a predetermined set, or some "highly scoring" set), some of which are installed on other machines. In this way, all of the infected hosts' information is available to all other hosts in a pre-processed and pre-selected format, and the neural nets will evolve using this information. Each program will also maintain a tiny database of important processed information that it decides to keep.
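To make the idea a bit more concrete, here is a harmless toy simulation in Python of just the scoring-and-forwarding part (nothing that installs or spreads - a few in-memory nodes only; all names, weights and thresholds are mine, purely for illustration):

import random

class BrainNode:
    def __init__(self, n_inputs=8):
        # Pre-determined weights with small random perturbations per "install"
        self.weights = [0.5 + random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.peers = []      # other BrainNode instances
        self.memory = []     # tiny database of kept information

    def score(self, features):
        # A single neural unit standing in for the small neural node cluster
        return sum(w * x for w, x in zip(self.weights, features))

    def process(self, features, threshold=2.0):
        s = self.score(features)
        if s > threshold:
            self.memory.append((s, features))                 # keep it locally
            if self.peers:
                random.choice(self.peers).receive(features)   # forward to a peer

    def receive(self, features):
        # Store forwarded information without re-forwarding, to keep the toy finite
        self.memory.append((self.score(features), features))

# Toy run: three nodes passing around randomly "scanned" information
nodes = [BrainNode() for _ in range(3)]
for node in nodes:
    node.peers = [n for n in nodes if n is not node]
for _ in range(20):
    random.choice(nodes).process([random.random() for _ in range(8)])
print([len(n.memory) for n in nodes])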
Of course, there are many details not worked out, but the idea is to turn the Internet into a giant brain, with each infected machine representing a small portion of the "organism." If some hosts are taken offline, the remaining nodes will pick up the slack, and with enough information replication, no real damage is done. As one of the main scoring algorithms will be self-preservation, hopefully the brain will learn how to protect itself over time so that it becomes a permanent fixture on the Internet. The Internet space will then be shared with an "intelligent" "organism," and perhaps with software upgrades and evolution, humans can even start having meaningful interactions with this "brain." I wonder if this is ever possible. Imagine harnessing just a tiny part of the processing power and information contained in all computers connected to the Internet, and having those resources go towards a seemingly omnipresent intelligent artificial brain that can evolve and adapt!
Monday, November 1, 2010
Productivity and Population
What is the goal of a civilisation? Is it to maximise production and population? At first thought, this seems to be the goal, since with more productivity come more goods and luxuries. But at what point is it not worth it? Our species is more productive than ever, thanks to new technologies, but are we really much happier? Perhaps what needs to be done is to work less and enjoy life more - although this could be bad for a civilisation if a more productive civilisation decides to take over a less productive one. I suppose it is all about balance.
What if we take productivity to the extreme? Eliminate all non-human life on Earth other than a few of the most productive crops and human symbionts. Produce all nutrition synthetically, and use the productive crops for calories. Each individual can be allotted a daily portion, perhaps a pre-mixed drink containing all the daily nutrition requirements, plus some drugs to keep the individual comfortable and content for the day. Saving the energy for human use, rather than having it wasted on life that does not contribute to human production, would help push production to extremely high levels. With more people (all controlled and made content by drugs), the number of new ideas generated would also increase. Would this be a bad society? Why or why not? If everyone is happy and productive, is this the perfect society? Other than happiness, productivity, and fertility, what else is necessary for the human experience? Although this hypothetical society satisfies all of those points, something doesn't seem quite right about it - what is the missing element?
On the other hand, perhaps we only think it's wrong because we grew up in our society - what would people from a society like the hypothetical one outlined above think about ours? Would they view us in the same way we view cavemen? Would cavemen view our society with the same discomfort with which we view the hypothetical society above? After all, our society uses drugs all the time to make us happy and comfortable, such as painkillers, alcohol and tobacco. Our society has eliminated, or is trying to eliminate, life forms that get in our way, such as smallpox, malaria, mosquitoes, etc. Our society has selected only a small number of productive crops to maintain, while letting the majority of less productive plant and animal species be crowded out by the life forms that we have chosen as most productive for human life.
What is the right balance, what should we be aiming for, and where are we headed?
Saturday, October 9, 2010
Not Having a Particularly Good Week
I'm not having a particularly good week - it seems like everything has gone wrong, or gotten worse, for me this week, on all fronts! This is one of the very few times when I feel like I am either taking a step in the wrong direction or making no progress in every single category. I won't provide any details, but:
- Career-wise, things are not going well this week.
- Academic-wise, things are not going well this week.
- Finance-wise, things are not going well this week.
- Health-wise, things are not going well this week.
- Life-view-wise, things are not going well this week.
- Social-wise, things are not going well this week.
- Creative-wise, no real progress has been made in a while, so no change there.
- Travel-wise, things have not improved since going downhill dramatically in the past few months.
- Personal project(s)-wise, things have not gone so well for me this week.
So... not a particularly good week for me. Let's hope that next week is better...
Wednesday, October 6, 2010
Consciousness Understanding Consciousness - For Computer Scientists!
I've always mused over the idea that perhaps we will never understand our own consciousness. Perhaps there are different levels of consciousness, and only a higher level of consciousness can understand a lower level. For example, maybe a plant (or a computer!?) has a "consciousness" of some sort, and we can understand how a plant functions because we are at the higher level of animal "consciousness," and a being at a higher level of "consciousness" than us could fully understand how we function (what if even inanimate objects have some form of "consciousness," whatever that may be?). Well, I won't get into discussions about this here, since I know very little about consciousness and I don't really have any way to argue for any side.
I recently came across some interesting computer science problems that reminded me of this view - but applied completely to computers! If you are unfamiliar with computer science, it may be a good idea to look up Turing Machines and the Halting Problem before reading the next part.
There is a fundamental problem, called the Halting Problem, that is not solvable by Turing Machines, even though Turing Machines can solve all computable problems (read: anything a computer can do). A Zeno version of the Turing Machine - a Turing Machine that doubles its speed of computation at each step - can perform an infinite number of Turing Machine steps in a finite amount of time (two time units, if the first step takes one unit and each subsequent step takes half as long as the previous one), thanks to the convergence of the geometric series. Thus, a Zeno Turing Machine can actually solve the Halting Problem for a regular Turing Machine! However, it is unable to solve the Halting Problem for itself or for other Zeno Turing Machines. From what I can tell, this chain seems to go on forever - a Zeno Zeno Turing Machine can solve the Halting Problem for a Zeno Turing Machine, but not for itself, and so on. This seems to create classes of computation (falling under the umbrella term "hypercomputation") that can't "understand" themselves or anything higher, but can "understand" the classes below them (the use of the term "understand" is even more apt when one considers Rice's Theorem, about Turing Machines being unable to determine any non-trivial property of other Turing Machines). I don't know where the boundaries of the classes are, but the Zeno extension seems to create some pretty intuitive examples.
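For anyone who hasn't seen why the Halting Problem defeats ordinary Turing Machines, here is the classic contradiction sketched in Python (the halts function is exactly the thing that cannot exist as an ordinary program):

def halts(program, argument):
    # Hypothetical oracle: True if program(argument) eventually halts,
    # False if it runs forever. No ordinary program can implement this.
    raise NotImplementedError

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:
            pass     # loop forever
    else:
        return       # halt immediately

# Does troublemaker(troublemaker) halt? If halts says "yes", it loops
# forever; if halts says "no", it halts. Either way the oracle is wrong,
# so no Turing Machine can compute it - while a Zeno machine sidesteps
# the issue by simulating the whole infinite run in finite time.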
Of course, this has nothing to do with consciousness, but it just reminded me of it, as it seemed to be such a perfect analogy! So I hope pseudo-scientists don't take this the wrong way and get a way overblown idea of how important this "connection" between computers and consciousness is.