Saturday
Feb 15, 2014

My case for BTRFS over ZFS

The computer industry always finds itself mired in FUD and heated debate over what can only be described as religion. The ZFS and BTRFS camps are clearly no different, so I figured I'd offer some actual real-world experience from the perspective of a Systems Engineer who has used both with great success.

 

First, let's address this site point-by-point. My impression is that it comes from a person with very little understanding of BTRFS, or of the fact that it is a filesystem under active development whereas ZFS is stable and mature. That should be anybody's first bullet point, BTW. ZFS has been deployed in production environments for a long while with great success, while BTRFS is only now gaining real traction.

 

ZFS organizes file systems as a flexible tree

The complaint here seems to be that a snapshot of a subvolume is logically "located" beneath that volume in the filesystem tree. This is probably more a consequence of the design (b-trees and all) than anything else. But since a subvolume is still a volume, the administrator can mount it anywhere they like, including at a mountpoint that is not beneath the BTRFS root.
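
For example, a minimal sketch (the device, subvolume, and mountpoint names are hypothetical):

# create a subvolume and mount it outside the BTRFS top-level tree
$ btrfs subvolume create /mnt/btrfs/projects
$ mount -o subvol=projects /dev/sdb1 /srv/projects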

 

File system operations in ZFS can apply recursively

This is an intentional design decision, and calling it out as a shortcoming of BTRFS actually shows a lack of understanding of the filesystem's design principles and of system administration in general. Typically, you want a command to operate on as small a set of data as possible, and the administrator handles the recursion with something like find or for. A matter of opinion, perhaps, but historical convention argues that ZFS does it wrong in this case.
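
As a rough sketch of what that looks like in practice (the pool, dataset, and subvolume names here are made up), ZFS applies the recursion for you while BTRFS leaves it to a one-line loop:

# ZFS: snapshot a dataset and every descendant in one command
$ zfs snapshot -r tank/home@nightly
# BTRFS: the administrator supplies the recursion
$ for vol in alice bob carol; do btrfs subvolume snapshot /mnt/btrfs/home/$vol /mnt/btrfs/home/$vol-nightly; done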

The actual design here is that BTRFS's subvolumes are b-trees in their own right. So, an operation that accesses root/vol1 would need to access an entirely different node to apply to root/vol2. When using BTRFS, it is best to think of subvolumes as separate filesystems: they are stored in the same pool, but they are logically separated from other subvolumes.

 

Policy set on ZFS file systems is inherited by their children

Again, because subvolumes are trees unto themselves, this stands to reason. It's a consequence of the design. However, that doesn't mean it's not possible. For instance, options such as compression are applied to subvolumes when the parent volume is mounted.

Of course, with an IOCTL these options can be enabled or disabled on a per-item basis, so it really isn't a point worth considering.
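
A quick sketch of those knobs (the paths are hypothetical; chattr +c marks an item for compression, and newer btrfs-progs expose the same setting through btrfs property):

# default for everything under the mount
$ mount -o compress=lzo /dev/sdb1 /mnt/btrfs
# opt individual files or directories in
$ chattr +c /mnt/btrfs/vol1/logs
$ btrfs property set /mnt/btrfs/vol1/bigfile compression lzo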

 

ZFS auto-mounts file systems by default

This actually reflects a lack of understanding on the author's part. There is a config file that informs ZFS of the ZFS filesystems that exist and that they should be mounted when the first ZFS filesystem is mounted. To be clear, ZFS has effectively replaced the behavior of /etc/fstab with its own config file. Even worse, the administrator can still use the fstab config file! This is fundamentally broken, IMO.

BTRFS, and every other filesystem with reasonable behavior, requires an administrator to mount the filesystem. Of course, with an automount daemon, I'm sure it would be trivial for any administrator to replicate this behavior.

Note, however, that if a child subvolume is created within a BTRFS subvolume, it will appear in the mounted filesystem. In other words, if I have a BTRFS volume "/mnt/btrfs" and I create subvolume "/mnt/btrfs/sub", then when I mount /mnt/btrfs the subvolume /mnt/btrfs/sub appears in the tree.

The concern about changing mountpoints of subvols is only half true, since a subvol can be mounted to any location by passing the subvol= option to mount.

 

ZFS tracks used space per file system

This is actually quite confusing at first. ZFS shows you quite clearly how much space is being consumed by each mounted ZFS filesystem. BTRFS shows how much space is consumed in the entire pool. The du command works as expected, whereas df shows the raw free space. This can make life difficult for the administrator, because they must understand the semantics of their data protection strategy to calculate how much space can actually be allocated. There is a patch for this, but it's a serious pain for newcomers to BTRFS. This is a clear usability win for ZFS.
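
For reference, the commands involved (the mountpoint and subvolume names are made up; I'm leaving the output out on purpose):

# raw free space across the whole pool
$ df -h /mnt/btrfs
# how the pool is actually allocated by profile (data, metadata, system)
$ btrfs filesystem df /mnt/btrfs
# per-directory usage still behaves as expected
$ du -sh /mnt/btrfs/somevol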

Because each item in BTRFS can have a different set of options applied to it, though, it does start to make a small bit of sense. Any file can (in the future) have an IOCTL call set its data protection and compression options, so predicting the amount of free space that is actually available for allocation given that detail would be very difficult.

This is also an interesting design question with room for discussion. Since storage in ZFS and BTRFS is actually a pool, should the administrator see the pool's view of storage, or the individual volume's view by default? Clearly, there should be an option to view each, but what should the default view be? I believe ZFS does it properly by showing the volume's values.

 

ZFS distinguishes snapshots from file systems

This seems to be an issue with the author's experience with BTRFS. A snapshot does not have to be a peer of its original. Snapshots, like any subvolume, can be given a destination. Personally, I like to create a subvolume that will act as a target for snapshots and create snapshots with a destination of that particular volume. In other words, I will have subvolume "/mnt/btrfs/snapshots", and I will then create a snapshot of "/mnt/btrfs/somevol" at "/mnt/btrfs/snapshots/somevol". In reality, I like to prepend or append the Unix timestamp of when the snapshot was taken as well, but the point is that snapshots can be created in some other volume if the administrator chooses.
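
A minimal sketch of that layout (the names are just my convention, nothing BTRFS requires):

$ btrfs subvolume create /mnt/btrfs/snapshots
$ btrfs subvolume snapshot /mnt/btrfs/somevol /mnt/btrfs/snapshots/somevol-$(date +%s)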

 

ZFS lets you specify compression and other properties per file system subtree

BTRFS allows these options to be specified on a per-file basis.

 

ZFS is more stable

This is true, as long as your definition of stable relates to deployment in production systems and code age. Since my definition of stable is exactly that, I agree. BTRFS is much newer, and because of this and the existence of the time continuum, ZFS is more mature. However, we should all agree that sometimes new products get to benefit from the discoveries made over time. New is often better than old. This is certainly true in computers.

Given the same time that ZFS was granted, I have absolutely no doubt that BTRFS will be as stable and mature, if not more so. Because BTRFS was developed as an open source project from day one, it has the anecdotal advantage of more contributors and testers than Sun allowed when creating ZFS. I say anecdotal because there's no hard data I could dig up to prove this one way or the other. Sun created ZFS behind closed doors, so its extremely early history is pretty much lost.

 

ZFS has RAIDZ

Nobody should be using RAID-Z. Period. As a user of ZFS, this was the very first item drilled into my head. Using RAID-Z is effectively equivalent to using RAID-5. RAID-Z2 is the rough equivalent of RAID-6, and there is even RAID-Z3. However, every data reliability study ever done has noted that using mirrored copies provides far superior protection. Since BTRFS and ZFS advertise data reliability as their main selling point, why on earth would you ever choose the cheap-and-dirty way out?!

Even more important is performance. Using RAID-5 or RAID-6 is far slower than using RAID-10. This is true whether you use RAID-Z instead of a RAID-10 style protection scheme in ZFS, or RAID-5 protection instead of RAID-10 protection in BTRFS. By the way, RAID-5 and RAID-6 are available in BTRFS. They were planned from the early development days, but they simply weren't a real development priority since most consumers are using RAID-10 anyway.
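
Setting up RAID-10 protection in BTRFS is a one-liner, and an existing pool can be converted in place (the device names and mountpoint are hypothetical):

$ mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# or convert an existing pool's data and metadata profiles
$ btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/btrfs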

PLEASE don't use RAID-Z or RAID-5 for data protection.

 

ZFS has send and receive

BTRFS has had send and receive for quite some time at this point. It was one of the first "extra" features added, and it certainly existed before BTRFS was considered a stable format.

As far as ease of use goes, "btrfs send" is pretty much equivalent to "zfs send".
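
A rough sketch of both, assuming hypothetical pool paths and a backup host named "backuphost":

# ZFS
$ zfs snapshot tank/home@1
$ zfs send tank/home@1 | ssh backuphost zfs receive backup/home
# BTRFS (the source snapshot must be read-only, hence -r)
$ btrfs subvolume snapshot -r /mnt/btrfs/home /mnt/btrfs/snapshots/home-1
$ btrfs send /mnt/btrfs/snapshots/home-1 | ssh backuphost btrfs receive /mnt/backup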

 

ZFS is better documented

This is very subjective. Much of the great ZFS documentation has all but disappeared since the Oracle acquisition, and much of what remains is very old. In contrast, the "btrfs" command is very well documented, much like the "zfs" and "zpool" commands are, with man pages and fantastic "--help" output.

On the other hand, technical documentation for BTRFS abounds. The Wikipedia page details its design very well, and the BTRFS page at kernel.org is a wonderful central repository of information for users and administrators alike. Again, this is subjective because as a systems engineer I am interested in details that most users and some administrators are not interested in. With regard to the information easily available to an administrator, I find no appreciable difference.

 

ZFS uses atomic writes and barriers

It is important to understand that the author is incorrect about barriers and atomic filesystem transactions, but correct that ZFS uses atomic writes and write barriers. Barriers do not mean data will not be lost or that I/O transactions are atomic. Instead, a barrier is a way to ensure, as the author noted, the order of writes reaching durable media. By ensuring the order of certain writes, you can provide some guarantees about data durability.

However, the author makes a common incorrect claim: "you will never lose a single byte of anything committed to the disk". This is absolutely incorrect. Instead, the claim made by journaled, barrier-enabled filesystems is that you will not lose anything committed before the last barrier-written transaction. If you yank power from a host that is writing data to durable media, you absolutely will lose data. The guarantee here is that you will only lose certain data, and should never be left in a position where the filesystem is corrupted beyond repair.

Understanding these facts is critical to a system administrator, and I have seen this incorrect statement far too often.

With regard to BTRFS, it also uses write barriers to make guarantees about its data and metadata. I'm not sure why the author was convinced that it didn't, but that is the power of FUD, I suppose.

 

ZFS will actually tell you what went bad in no uncertain terms, and help you fix it

It seems the author had no idea that "btrfs device stats" existed. The output of that command, when pointed to a btrfs mountpoint, is a list of devices that compose the pool and counters for several kinds of errors and corruption types.

Furthermore, "btrfs scrub status" will give you the results of the last scrub operation. This is very similar to ZFS, so I'm not sure where the confusion comes from. zpool does give some interesting stats about the pool being interrogated, but I personally don't need I/O stats re-implemented in a filesystem-specific way. On Linux, I have /proc and /sys. Those two alone replace much of what a systems engineer needs from the zfs and zpool commands, so this is a wash to me, or a win for BTRFS.
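
For reference, the commands in question (the mountpoint is hypothetical):

# per-device error and corruption counters
$ btrfs device stats /mnt/btrfs
# kick off a scrub, then check on it later
$ btrfs scrub start /mnt/btrfs
$ btrfs scrub status /mnt/btrfs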

 

ZFS increases random read performance with advanced memory and disk caches

No, it does not. I've used ZFS in several scenarios that required different I/O patterns to behave differently. I've used ARC, and L2ARC on SSD. What I can say without question is the following:

ARC is a terrible replacement for the VFS caching layer in Linux, because the VFS has no idea what ARC is. More importantly, RAM allocated to ARC is considered actively used memory, whereas almost every other filesystem cache will purge less recently used data when the operating system requires the memory for some other operation. That is the better design, because I'd rather the OOM killer not come by and kill a critical task when I have several gigabytes of data in RAM that could simply be re-read from disk. It's pretty rare that a NAS/SAN would require a massive, active working set in RAM, so I find the VFS caching layer to be adequate as it is now.

L2ARC is, to me, the recognition that adding faster disks (SSD) is easier than adding RAM to a host. L2ARC caches more frequently used blocks on a fast bit of storage so that when the data is requested again and isn't cached in ARC, it can be read from L2ARC faster than the backing spindles could produce it. I've had very good experience with FusionIO cards used as L2ARC. Linux has an answer for this in recent kernels, spurred by the BTRFS developers' request that a caching block device be made available. The benefit on Linux is that instead of this being a filesystem-specific addition, the OS itself adds the feature and all filesystems benefit!

In practice I have found that L2ARC only helps a bit, because random access typically happens...wait for it...randomly! Unless it happens randomly against the same blocks, caching the data is a useless exercise that adds more I/O to the underlying pool of spinners. I can only imagine this kind of thing is added when a developer realizes their filesystem offers terrible performance for database loads, and they incorrectly presume they can solve the problem by adding an additional caching layer.

 

ZFS increases random and synchronous write performance with log devices

No. It does not. The ZIL offers very little in the way of performance enhancements. It is only applicable to sync writes, which are uncommon in the real world of NAS/SAN servers. Many people presume the ZIL automagically combines random writes into larger sequential writes. Again, this is only true when the writes are synchronous and happen within the flush barrier period (default of 10 seconds).

There is an answer to this in Linux: the block I/O scheduler does some basic write combining before flushing I/O to the block devices below. Again, the Linux approach is different in that if an idea is good, it should be applied to the system as a whole. ZFS's approach comes from the fact that it was bolted onto Solaris and had to solve many of these problems on its own.

In addition to write combining in Linux's block layer, the previously mentioned block caching layer added to the kernel allows for a write-back or write-through cache. This cache is fundamentally different from the ZIL because it actually caches the block data destined for the filesystem. That data can be kept in cache for instances where recently written data is re-read, effectively optimizing away those reads, which can lead to actual performance increases for things like NAS/SAN storage.

 

ZFS supports thin-provisioned virtual block devices

This is true: BTRFS does not expose extents of storage as block devices. I find this to be a pretty significant feature for ZFS when used to build a SAN. An administrator could instead create an MD RAID device, place LVM on top of it, and expose an LVM logical volume to a client, but that is a pretty significant workaround and offers no protection from "bit rot".
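
For comparison, the ZFS side really is a single command; a sparse (thin-provisioned) zvol shows up as a block device that can be exported over iSCSI or handed to a VM (the pool and volume names are made up):

$ zfs create -s -V 100G tank/vm-disk1
# the block device then appears under /dev/zvol/tank/vm-disk1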

 

ZFS helps you share

Yes, ZFS will inform the NFS or Samba daemon of the new export. I find this to be an unnecessary "optimization" and a pollution of the zfs command. With things like AppArmor profiles, it's also unlikely that the zfs or btrfs command would be allowed to modify some other configuration file in /etc by default, for security reasons.
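
For context, the feature amounts to a dataset property (the dataset name here is hypothetical):

$ zfs set sharenfs=on tank/exports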

At the end of the day, the NFS and CIFS daemons on a server have nothing to do with the filesystem below, so I find no compelling reason to teach those tools how to speak to a file server. This is such a shallow "win" that it seems like a real stretch.

 

ZFS can save you terabytes by deduplicating your data

Well, it could save you that much if you had several terabytes of exactly the same data. Unfortunately, ZFS's deduplication performance is abhorrent and it uses so much RAM that it's foolish. Actually holding 1TB of deduplicated data can require between 2.5 and 640 GIGABYTES of RAM for the dedup tables in ZFS (roughly speaking, a terabyte of 128KiB records is about 8 million table entries at a few hundred bytes each, and shrinking the record size multiplies the entry count accordingly). This is memory that is not available to your OS or to the filesystem for caching. It is simply a lookup table!

Worse yet, the CPU requirements for ZFS's online (real-time) deduplication are considerable. Many people building a NAS or SAN device purchase a lower-end CPU because the only things it will be used for are checksum calculation and compression. Adding the complexity of the deduplication lookups adds real CPU requirements (and cost) to the host, and it tends to add quite a bit of latency in the real-world applications I've run.

This is not just a ZFS issue, though. I've also used online and offline deduplication on WAFL (NetApp) filesystems and on Windows Server hosts. Any time you enable deduplication, write speed decreases significantly.

BTRFS does offer deduplication, though. It has offered one type for quite some time: creating copies of files as reflinks to the original produces Copy-on-Write files whose only consumed blocks are those that have changed from the original. Additionally, BTRFS offers an offline deduplication system, which deduplicates files when the command is executed. This preserves normal write speeds while still allowing frequently duplicated data to be minimized on a schedule. I have attempted this with NAS/SAN appliances that served as a store for backups and virtual machines, and I can say it's usually not a feature anybody would want to use.

In the case of a backup storage system, deduplication is often performed by executing incremental backups, saving network bandwidth and storage at the same time. This has a much higher dividend than just deduplicating the storage system! In cases where virtual machine images are clones of a "golden master", I find cp --reflink to be more than adequate. This deduplicates the common operating system storage among the images and doesn't require any processing overhead or maintenance job.
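
A sketch of that golden-master workflow (the file names are made up); each clone shares all of its blocks with the master until it diverges:

$ cp --reflink=always golden-master.img vm01.img
$ cp --reflink=always golden-master.img vm02.img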

 

 

So, in conclusion, this article was mostly FUD or ignorance. I suspect it wasn't intentional; most people find themselves defending a particular position vehemently without realizing it, often for no other reason than that it was the choice they had made.

BTRFS is a newer filesystem than ZFS with a significantly different design and some common features. Comparing filesystems is a natural consequence of having so many choices, but I think it's important to point out the fact that ZFS was designed to fix some of the shortcomings found in Solaris at the time, whereas BTRFS's developers haven't had to face many of those challenges.

I have no doubt that BTRFS will mature quickly over the next year or two, and will provide Linux with a first-class, checksum-enabled CoW filesystem that can be used in production without question. In my next entry, I will try to compare ZFS's and BTRFS's designs and implementation details. Hopefully this will show where the two have common ground, explain some of the user-land differences, and highlight where each could borrow from the other to improve.

Sunday
Nov 04, 2012

Fully enabling discard

To those of you using a Linux kernel newer than 2.6.33 and an SSD: you should be enabling discard support. Enabling discard tells the kernel to issue TRIM, UNMAP, or another physical block erase command to the underlying device when blocks are freed. In general, using discard on your SSD causes freed cells to be erased. This usually happens in the background so they are clear when the cell is re-written at a later date. This saves you several operations at write time, which equates to somewhere around a 3x write speedup on an aged SSD.

 

Filesystem Discard

If you're using the EXT4 filesystem, simply add 'discard' to your mount options in /etc/fstab.

/dev/mapper/ssd-root / ext4 discard,errors=remount-ro 0 1

When the device is mounted again, discards will be enabled. If you don't want to reboot, you can always make the necessary changes and then execute:

$ mount -o remount /

 

The discard option will only work with the ext4 filesystem and not ext2 or ext3. If you add the discard option to an ext2/3 fstab entry, the filesystem will be mounted read-only which will prevent proper booting in some cases!

An alternative to enabling TRIM on ext4 is to run BTRFS, which also replaces LVM or other volume managers. Discard is enabled by default in most cases when mounting a BTRFS volume.

 

LVM TRIM

As of kernel version 2.6.38, LVM2 supports enabling discard. Enabling it will cause LVM to issue a discard command when a logical volume is no longer using a physical volume's space. Documentation isn't very explicit about this support regarding filesystems releasing blocks, but at the very least when an lvreduce or lvremove is executed, the physical device will be sent the appropriate discard command.

To enable discard support in LVM2, simply change the issue_discards value in the /etc/lvm/lvm.conf file.

devices {

    ...

    issue_discards = 1

}

 

 

DM-Crypt Discard

If you are using Linux kernel version 3.1 or later, you may enable discard for encrypted block devices. Cryptsetup allows you to specify a discard option on the command line when creating an encrypted device, but you may also enable it by adding the 'discard' option to /etc/crypttab entries.

sda3_crypt UUID=a73246e5-1337-dead-beef-abcdef543210 none luks,discard
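
The command-line equivalent, for a device opened manually rather than through crypttab (the device and mapping names are illustrative):

$ cryptsetup luksOpen --allow-discards /dev/sda3 sda3_crypt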

It has been mentioned that using discard with an encrypted SSD may lead to a data vulnerability. The theory is that if discarded regions can be identified on the physical device, information about free and used space, device size, and filesystem type could be exposed. For the most part, on an SSD, a discard command becomes a TRIM, which actually erases the storage cell. On a thinly provisioned device, such as a SAN, these blocks may not actually be zeroed. I think it's unlikely that exposing information about the kind of filesystem would put the actually encrypted data at risk.

Thursday
Apr 12, 2012

Network Monitoring Nightmares

Network Monitoring Systems (NMS) are often cumbersome, ugly, hard to maintain, painful to install, and tedious to configure. In many cases, more energy is spent working around problems with the NMS than actually monitoring the network it's deployed on! And if you think this is a problem that can be solved by purchasing a commercial product, you're in for a real shock. So what do we do about the state of network monitoring?

 

Background 

First, some background. Not background on network monitoring software, there are far too many packages out there. Instead, this is background on the problem faced by a typical IT or Operations department. First, there is a network. The network initially has only a few servers used by a few people, and everything hums along. When there's a problem, the person next to you notices and asks you to look into it. Because there are so few devices making this network run, it's easy to find where the problem is and typically easy to fix. But then the network grows. You need more storage, more servers, more switches. Next you get into complex networks designed for maximum stability: redundant connections, high-availability products, load balancers, routers, firewalls, and so on. Before you know it your tiny network has grown into a multi-datacenter goliath and you're spending most of your time trying to put out fires and find faults in your design. And like everything in life, the more complex you make things, the harder they are to unravel when you need to find the source of an error.

Now you need a network monitoring system. Some piece of software that can watch each individual component on your networks to make sure they're operating properly. This system also needs to watch services running on servers to make sure they don't fail or return unexpected results. And of course, there are countless scripts and applications running in the background to make sure the plates keep spinning. The picture is simply too large for one person to watch manually, and you'll never catch problems before users are impacted. No matter how good you think you are.

 

Step 1. - We can do this ourselves!

The first step most administrators take is to write a collection of checks themselves. Most of the time they pick a language they already know and get to work. First they check that they can reach the web servers. Child's play for all but the newest admin. Open a connection, retrieve a page, make sure the page downloaded, and we pass. Otherwise we fail. Chalk up a success for the admin team!

But then the web sites change, the file moves, and the test fails even though the server is up and working fine. So the admin checks to make sure any page is returned. Eventually, the site changes again and the page that's being successfully returned is actually an error! Now we think the site is working fine, but it turns out customers have been reporting errors to the support line for the past three hours! This game of cat-and-mouse goes on and on, and the collection of scripts gets larger and larger until finally the administrators decide it's becoming too complex to manage the network monitoring system themselves. Now they start step two- searching for a monitoring product.

 

Step 2. - Just pick a monitoring system!

Now that the administrators (there's a team of them now) have admitted that their time would be best spent administering their company's infrastructure and not writing a monitoring system, the search for an NMS begins. Because the team is used to doing things manually, or to modifying the monitoring system they've cobbled together every time there's a change, they will almost certainly pick a product that offers a small feature set and requires lots of manual intervention. This isn't because they want a system that's difficult to use, but rather because it's what they're accustomed to and they don't know any better yet. So they suffer through the initial setup and maintenance.

Eventually, someone will notice that a system offering more features and more automatic functionality exists, and that it would make the job of monitoring much easier for everyone. Unfortunately, so much effort has been invested in the current solution, and it has collected so much historical data, that it's deemed too difficult to switch NMS products. This typically happens several times, and every time the story is the same. But inevitably something so drastic happens- either a failure in the monitoring system, a loss of data, or a lack of expandability- that the team agrees the time has come to once again change monitoring systems.

 

Step 3. - Maybe we should pay for this?

Once it has been deemed that the monitoring system is critical to the business, a project can be created and actually assigned money. This can go one of two ways depending on what the administration group looks like- completely commercial or mostly commercial.

A completely commercial system would be an HP OpenView or an IBM Tivoli system. These are large packages that have tons and tons of functionality, professional development, lovely graphical views, fantastic charts and graphs, prediction models, event correlation engines, inventory modules, expensive support contracts, and serious system requirements. It's typically not enough to just buy the monitoring system- you need database software to manage all of the information generated, too! And that can add thousands to the price tag. But, as long as there is money in the budget, and the sales people do their jobs, these solutions seem like they have endless capabilities and it's a no-brainer to go with a completely commercial offering. But the price tag is often so high that the sticker shock is insurmountable. Even worse, if you do end up with a completely commercial offering, you quickly find out that you're essentially on your own to write the components that check your environment again! Now you're back to step one, but you're tens or hundreds of thousands of dollars poorer. And you've learned a very valuable and very expensive lesson.

By contrast, a mostly commercial system is typically a product that has a free or free/open source component that is expanded on by the commercial branch. These companies lure you in by offering extra value, product support, training courses, development resources, plugin packs, and things of that nature. Most of the mostly commercial offerings compare themselves directly to the completely commercial products, and sometimes they even offer truly better products. Sometimes they can be rougher around the edges than the highly polished products offered by HP and IBM, but for the most part they offer exactly the same functionality at a reduced cost. But like everything, the buyer needs to beware. The development teams at these companies are usually pretty busy building new features and fixing bugs. Getting new features developed for your organization can be difficult or take a very long time, and this can be a hard lesson to learn when you've spent tens of thousands of dollars on a product that you're beginning to realize does barely more than the product you've just replaced. What's worse is that you're beginning to realize that the features in the commercial offering are barely worth purchasing, and you probably could have used the free offering instead.

Wonderful.

 

Step 4. - Just settle.

So here we are. Your team has spent several man-years trying to solve a problem that the success of your business has created. If you've made it to step three, you've probably realized the network monitoring system space is a wasteland of half-baked, half-functional solutions. There are countless solutions to choose from, but you now realize all of them have the same shortcomings and none of them address the one killer feature you need.

What's worse is that if you've purchased a solution, you no doubt realize that the cost of developing extensions or plugins for the monitoring system is so high that it's effectively out of your reach. So, once again, instead of renewing your support contracts or paying a programmer to write the same things you wrote what feels like forever ago, you decide it's time to look at the market again.

I've been living this nightmare for the past decade. I've used every piece of monitoring software you can imagine, and I've used many of them more than once. Little has changed in the past 10 years, which is amazing! There are few software industries that have the same approach and concepts they had 10 years ago, but somehow monitoring software just seems to stagnate the instant it's released. They all look the same, they feel the same, and for the most part they all have the same failures and successes. What's worse is that several newcomers base their products on offerings that haven't seen development in so long that the projects have effectively been abandoned!

So what do you do now? Where do you go from here? Well, if you're like the majority of administration groups, you settle. You'll find a package that does most of what you want, you'll find someone skilled enough to add the functionality you need, and you'll be frustrated every single day you use it. That's the sad state of monitoring, and we're all in the same boat.

 

Conclusion 

I'd personally like to call on Google, Yahoo, Akamai, Facebook, and other massive networks to tell the rest of us what they use. They must be going through the same pains as startups, and with the skilled people they have onboard they must have found a solution. So what is it? What's their silver bullet? Are they willing to release the code for their tools, to host talks about monitoring, to teach us their ways? I certainly hope so, because almost every group of admins could benefit from their knowledge.

Wednesday
Jan 18, 2012

SOPA is downright dirty

Lots of you have at least heard someone say the acronym "SOPA". If you've ever wondered what it is, this bit's for you. If you're not interested in the least about American law or your legal rights being eroded by corporations, don't bother with this post.

 

Most of you know that I'm not a conspiracy theorist, and that I'm not against large corporations. In America, businesses aspire to become large and successful. And that's perfectly acceptable to me. But, the capitalist free-market system breaks down completely when you give one group preferential treatment over another. And, like most forms of government, it's easiest to influence officials with money. Lots of money.

 

So, what does this have to do with SOPA? And what is SOPA in the first place? S.O.P.A. is the acronym for the "Stop Online Piracy Act", a bill that is currently being proposed in the US House of Representatives (Here). In effect, SOPA boils down to a few basic ideas:

 

  • Piracy is running rampant on the Internet
  • More piracy means less revenue
  • Less revenue means fewer jobs
  • Industry needs tools to stop piracy
  • Law enforcement needs more swift authority to stop piracy
  • Law enforcement needs to be able to seize sites, servers, and domains suspected of piracy

 

 

On the surface, you'd think this is something worth supporting. It's a bit of hyperbole to say that piracy supports terrorists or that piracy is taking down the entertainment industry, as their TV and radio advertisements would have you believe, but we get the point. Lots of people are stealing lots of content, and we need a way to stop that.

 

But as you begin to really think it over, you quickly realize law enforcement agencies already have the authority to seize property used to commit a crime, and property obtained through criminal acts. So, there's no need for new legislation granting them the same tools they already have. But wait! SOPA doesn't want to take down sites whose owners have been convicted. They want to take down sites that are suspected. And this suspicion doesn't have to come from a lengthy investigation, like it would if the police suspected you of a crime. Instead, all a copyright holder would have to do is believe content has been pirated. So, if you post a video to Youtube and it has a clip from a popular NBC show, they will flag it to be removed. Under SOPA, this literally means Youtube.com could be seized from Google! Even though your clip should be protected under fair use. Even worse, this kind of action has already been used to censor videos, text, and pictures by several different groups. All these groups would have to do is file a complaint that your site is pirating material, and down goes your site!

 

This might seem far-fetched, but as I alluded to, most of the provisions requested in SOPA already exist in one bill or another. A perfect example of what can happen is the case of Dajaz1.com, a site that legally posted music for visitors to listen to. It was swept up by the Department of Justice along with 350 other sites at a single time. The domain was taken from the owner! This is tantamount to the police taking your car without any actual evidence you've done anything wrong. And in the end (a year later), the DoJ determined that Dajaz1.com was, in fact, operating legally and the domain was returned. Meanwhile, the site's owner was out of business for an entire year with literally no legal avenue for recourse. To read a long report about what happened, check out the NYTimes piece here.

 

I'm not the only one who believes this will become the rule and not the exception if SOPA were allowed to pass. In fact, today (Jan. 18, 2012) is SOPA Blackout day. See the list of sites, action items, protest sites, etc. at http://sopastrike.com/. The list of sites is pretty impressive, and the number of companies putting their money where their mouth is impresses me.

 

So, to learn the facts about SOPA, I'd urge you to check out the Wikipedia article here, the Google information here, and of course to SOPAStrike site here. You can find Google's search results for SOPA here, which is currently blowing up with information from news sources around the world. I also think you should read some pro-SOPA info including the GovTrack page here, the MPAA letter to the House of Representatives here (PDF File), and finally an article found on the eMediaLaw site here.

 

We, as Americans, need to defend our intellectual property rights, but we need to do it in a way that doesn't come at such a severe cost to the citizens. After all, while a corporation may be recognized as a legal entity with all the rights of a person, it still can't vote. Please use the Wikipedia page here to contact your representatives and tell them what you think of SOPA, for or against.

Tuesday
Nov 08, 2011

A personal rant

I usually try not to get too into these kinds of things, because I have many friends and family with many different beliefs and I do my best not to offend them. But every once in a while, a little gem like this study comes along and deserves a little bit of a "DUH!" reaction.

 

So, The Barna Group, a group that studies the "intersection of faith and culture", published an article titled "Six Reasons Young Christians Leave Church". The article comes from a five-year project with the goal of creating a more durable faith in today's children. While I can give one great reason I don't go to church, I suppose I could easily come up with six or so that led me to Atheism. Many of those reasons are actually covered in this survey, which I find humorous in itself. Common sense issues that have led the "flock" to stray from the church, and that have been issues for 20+ years now. Way to get out ahead of it, guys!

 

Anyway, here are the six categories and some breakdowns of how the group(s) surveyed answered.

 

Reason #1 – Churches seem overprotective.

  • 23% answered “Christians demonize everything outside of the church” completely or mostly describes their experience with the church.
  • 22% answered “church ignoring the problems of the real world”
  • 18% said “my church is too concerned that movies, music, and video games are harmful”

I'm pretty sure nobody in the real world is shocked by these clusters of answers, and there really isn't much to say about this. Religion tells us what we should and shouldn't do in all aspects of life, and kids tend to get annoyed by that kind of overbearing protectiveness. I personally find myself in the 22% that says church completely ignores the problems in the real world, but the problems of the world I see are very different than the ones I saw as a child.

 

Reason #2 – Teens’ and twentysomethings’ experience of Christianity is shallow.

  • 31% responded "church is boring"
  • 24% said "faith is not relevant to my career or interests"
  • 23% noted "the bible is not taught clearly or often enough"
  • 20% said "god seems missing from my experience of church"

Shocking revelations in this category, I know. People are bored by listening to a preacher carry on for an hour, telling them what to do, instead of an hour of discussing what's going on in the world and what we think about it. Hasn't anybody ever noticed that when you TELL people how to behave, they ignore you completely, but if you let them think they've come to the answer themselves, they tend to feel more personally connected to it? That was rhetorical...the answer is "no" when it comes to church.

Also, I think the bible should be taught clearly in church or "bible study". I think more people would find that they completely disagree with the vast majority of the things in the bible, not least of which is how to properly keep slaves, how to divide the spoils of war (how to rape your victims' women), how to blindly follow the voices in your head and kill your children, and other such timeless lessons. So, I agree with these kids, but not for the reason the survey would like.

With regard to god being missing from the church experience, that's not really shocking either. Again, it's hard to feel the presence of any higher power at 9am on a Sunday with a priest or preacher re-enacting rituals from the dark ages. Many churches try to dress up what's going on by having christian rock music, energetic sermons, and free donuts and coffee after service, but the truth of the matter is the only thing I miss are the donuts.

 

Reason #3 – Churches come across as antagonistic to science.

  • 35% claim "Christians are too confident they know all the answers"
  • 29% say "churches are out of step with the scientific world we live in"
  • 25% believe "Christianity is anti-science"
  • 23% have "been turned off by the creation-versus-evolution debate."

No real surprises here. Though, I think it would shock most catholics to find that the Pope's science advisors (the guys in the cool black and red robes) actually believe in evolution. In fact, they find it very difficult to argue against evolution given all the things we've learned about "junk" DNA (parts of DNA from other species we still have in our own), the germ theory, nuclear science, astronomy, etc. It may also shock people to know that these same scientists, employed by The Vatican, also believe in the "big bang" theory and global warming!

So, it seems the church isn't always anti-science. But its members sure are. In ever-increasing numbers, ignorant people go on TV and write editorials and articles in newspapers and magazines, trumpeting the evils of modern science. How science is here to steal your children's innocence with HPV vaccines, and teach them all kinds of immoral things about sex and sexuality with the "gay agenda" and free condoms everywhere. But I can promise you, if you sit down and read just one single article about any of these topics in any scientific publication (I like Nature, Nat. Geo., and The New England Journal of Medicine), you'll actually find that research is being done to understand and improve life by leaps and bounds. I just find it ironic that the very people science is helping today with blood pressure medication, cancer treatments, and vaccines against once-deadly diseases are the same people crying for the abolishment of scientific study.

 

Reason #4 – Young Christians’ church experiences related to sexuality are often simplistic, judgmental.

  • 17% feel they “have made mistakes and feel judged in church because of them.”
  • 40% said the church's “teachings on sexuality and birth control are out of date.”

Well, now you've gone and done it. You've stepped in a hornet's nest with this question, Barna Group. The church's teachings on sexuality, and the opinion they disseminate is so backwards and out of date that almost all psychological associations warn that they can and will lead to self-destructive behavior, not the least of which is suicide. And if you think gay people are the biggest victims here, you're wrong.

While the church speaks out emphatically against gay individuals in every forum it can, it also shames young people into thinking that sex outside of marriage is so morally reprehensible that they need to feel guilty and ashamed. And this doesn't just mean intercourse, either! Nope, you can't masturbate either. Because that's a sin so great, it was actually called out in the bible itself.

What the church fails to recognize in these modern times of understanding is that sexual impulses are a sign of a healthy and fully functional person of sexual maturity. We as a society tend to put limits on what we consider "normal" sexual behavior, even as we understand more and more that "normal" is completely meaningless. But the church remains steadfast in its position. You can not have sex, you can not protect yourself from STDs while having sex (unless you're in the worst-affected regions of Africa), and you absolutely can not use birth control to prevent unwanted pregnancies.

But there's good news in this answer. 40% of young adults surveyed agree that the church is full of it, and doesn't understand sexuality in a modern world. So, there's hope for the future yet.

 

Reason #5 – They wrestle with the exclusive nature of Christianity.

  • 29% said “churches are afraid of the beliefs of other faiths”
  • 22% answered “church is like a country club, only for insiders”

Well, be thankful you live in the 21st century, because not very long ago President Kennedy had to go on national television to answer allegations that his becoming president was part of a catholic conspiracy. Yep. Distrust, misunderstanding, and outright hatred for other religions or beliefs run rampant in nearly all religions. And for good reason! It's right there in almost every holy scripture.

In fact, being "wrong" is so heinous in the church's eyes that they fully believe a non-believer (or wrong believer) will burn in torment for all eternity. Yeah, that's right. They believe the divine creator of the universe spent the past 14.7 BILLION years plotting to torture you for eternity. But remember, the church teaches you that even though you're going to burn for eternity, god loves us all.

 

Reason #6 – The church feels unfriendly to those who doubt.

  • 36% Not being able “to ask my most pressing life questions in church”
  • 23% having “significant intellectual doubts about my faith”
  • 18% answered their faith "does not help with depression or other emotional problems”

Look, if you're having problems with depression or other emotional problems, you need friends and maybe a psychologist. You don't need a man in a magical robe with a hotline to god. I promise. You need to surround yourself with people who love and support you, even if they don't believe in the same ultimate answer you do.

And it's good to have intellectual doubts about faith. Faith, by definition, is believing in something there's no proof for! I have intellectual doubts about whether I put on deodorant in the morning or not, so doubting a belief system invented in the dark ages to explain where we came from and give meaning to life? Yeah, go ahead and doubt that!

Finally, I once made the mistake of asking "hard" questions about church and the religion I was brought up with. Over the years, the questions got harder and the answers only got simpler. When you ask a priest or preacher why millions of kids die of hunger and disease every day, or why babies are born with defects, or why god only delivers miracles for a tiny percentage of the population that so desperately need them, they will eventually come to the conclusion that "We don't know what god's ultimate plan is". Well, I'm not satisfied with that answer, and clearly neither are at least a quarter of the people going to church every week.

 

So, the question becomes this. Does the church loosen the reins and adapt to the changing world? Or does it double down and enforce the existing rules with an iron fist? Does church become a recreational and mostly social activity in the future, or does the church keep fighting the tide of people waking up to the real world around them? I'm not sure, but I don't think the prospects are very good for the church. Every day people realize that the bible is wrong in so many aspects of life, and they begin asking why bother if the very book their religion is based on is wrong. And that makes me happy.

 

Bonus: If anyone reads this and would like to read more factual evidence for evolution, and many other foolish misconceptions people still have about the real world, check out http://www.talkorigins.org/origins/faqs-qa.html