Add one line to wp-config.php: define('WP_ALLOW_REPAIR', true);
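With that line in place, WordPress exposes its database repair screen at /wp-admin/maint/repair.php. Remove the line once the repair is done, since that page requires no login.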





Xcode attempted to locate or generate matching signing assets and failed to do so because of the following issues.

Missing iOS Distribution signing identity for … Xcode can request one for you.



Double-clicking the new certificate imports it into Keychain automatically; then delete the expired Apple Worldwide Developer Relations Certification Authority certificate and everything works again.
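If you prefer the command line to the Keychain Access GUI, the same certificate can be inspected there (a sketch; this is an alternative to the steps above, not part of the original post):

# list all WWDR certificates currently in the keychain
security find-certificate -a -c "Apple Worldwide Developer Relations Certification Authority"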

April 5th, 2016: Forcing an unmount with umount


umount /mnt/udisk

umount: /mnt/udisk: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))


umount -f /mnt/udisk


fuser -km /mnt/udisk
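If killing processes blindly feels too aggressive, it helps to first see what is actually holding the mount point. A small sketch using standard tools (same mount point as above):

fuser -vm /mnt/udisk    # list the processes using the mount, with details
lsof /mnt/udisk         # alternative: show open files on that filesystem
umount -l /mnt/udisk    # lazy unmount: detach now, clean up once the device is no longer busy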





It kept reporting "Windows Installer encountered a problem".


Chief among them was that Apple Update thing.


March 2nd, 2016: VPN on H3C ER-series routers

The plan was to enable the VPN service on the company's H3C ER8300G2-X router.

At Substantial, between the tides of client work, we interleave work on internal projects to explore product ideas and experiment with technology.

Recent experimentation by the San Francisco team for Substantial Dash included stepping outside our comfort zone for application deployment. We normally use Chef to provision servers and Capistrano to deploy application code to those servers.

Typical challenges

  • How do we build a host server and then install the web application?
  • How do we release a new version?
  • Is a version just application code, or does it include dependencies for the host server too?
  • Is server provisioning and app deployment reproducible?

Docker containers continue to emerge as an answer to these questions. In fact, Docker supports many of the best practices defined in The Twelve-Factor App.

Put your app in a box

Containerizing an application's runtime environment is a well-known software design pattern for modular, secure systems. The pattern occurs over multiple layers:

  • packaging the runtime environment (a "version")
  • containing the file system
  • firewalling the network
  • metering utilization of network and processor

Well-known container technologies range from vendor-specific to vendor-agnostic.

Lightweight virtualization with Docker

Docker containers share the kernel of the host, while virtual machines each have their own kernels sharing the processor. VMs are comparatively slow and expensive because they are so low-level. They duplicate the operating system overhead for every running virtual machine, preventing a single process scheduler from managing everything efficiently.

Today, the Docker daemon easily runs on Linux to host containers. The container images are typically based on a full Linux distribution so that all of the standard tools and libraries are available. Tiny containers may be created using minimal, embeddable OSs such as BusyBox.

Images vs Containers

Every Docker "container" is run from an "image."

The image is a snapshot that can be moved around and reused; docker images lists the images available to run with the Docker daemon.

The container is the live runtime; docker ps lists the containers that are currently running.
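In practice, the two commands look like this:

docker images    # snapshots available to run
docker ps        # containers currently running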

The Dockerfile

The most visible part of Docker is the Dockerfile. Create it alongside your application code to construct an image of the application's complete environment. Notice in this example Dockerfile how everything the app needs to operate is built up from stock Ubuntu Linux.
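As a stand-in for that example, here is a minimal sketch of such a Dockerfile (a hypothetical Ruby web app; the package list and paths are illustrative):

# Everything the app needs, built up from stock Ubuntu
FROM ubuntu

RUN apt-get update
RUN apt-get install -y ruby rubygems
RUN gem install bundler

# Application code is added late so the layers above stay cached
ADD . /app
RUN cd /app && bundle install

EXPOSE 8080
CMD ["/bin/sh", "-c", "cd /app && bundle exec rackup -p 8080"]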

Packaging the code

Many web developers are accustomed to deploying to a long-running server using tools like Capistrano. You push new code, restart the web server, and ta-da, a new version is deployed. But what happens if you need to change a system library, downgrade a programming language version (e.g. Ruby 2.0.0), or upgrade a dependency (e.g. LibXML or ImageMagick)? Somehow you would have to coordinate infrastructure changes with application changes; as applications grow, this can become very messy.

Containers solve this conundrum of server operations by defining all dependencies alongside the application code. Thanks to the build caching, changing application code does not necessarily require rebuilding everything in the container. Notice once again in this example Dockerfile that the application code is added late in the build sequence.

Runtime configuration

Configuration that remains constant across environments (dev, test, and production) can be kept with the application code. We include the standard etc/ config files right in the repo, and those files are added to the container with Dockerfile ADD statements.

Configuration that is unique to each environment, or that is secret, is passed as environment variables when the docker run command starts the container. An example run command passing environment variables is:

docker run -e BASE_URL="http://awesomeapp.com" -e SECRET_KEY=meowmeow image_tag_or_hash

Configuration of what command to run inside the container is set using the Dockerfile's CMD and/or ENTRYPOINT. If a container needs to run more than one process, start a process manager such as supervisor to run and manage all the processes.
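A rough sketch of the supervisor approach (the config path and program entries are illustrative assumptions, not from the original project):

# Dockerfile excerpt: run everything under supervisor
RUN apt-get install -y supervisor
ADD etc/supervisord.conf /etc/supervisor/conf.d/app.conf
CMD ["/usr/bin/supervisord", "-n"]

# etc/supervisord.conf: one entry per process to manage
[program:web]
command=bundle exec rackup -p 8080
[program:worker]
command=bundle exec rake jobs:work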

Hosting containers

Many container hosting options exist today. The official Docker docs list compatible OSs and vendors. Docker-specific workflows are supported by its creator dotCloud, by StackDock, and by more emerging companies.

Depending upon whom you select to host your containers, you get varying amounts of automation around Docker image and runtime management.

Our project's deployment testbed is the Digital Ocean Docker application. In this case, all we get is a bare-bones Ubuntu Linux host running the Docker daemon. So how can we upload our images to the host?

For the Substantial Dash project, we opted to use images as files. This is the simplest private approach for experimenting with Docker.
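The images-as-files workflow can be as simple as the following sketch (host name and image tag are placeholders):

# on the build machine: export the image to a tarball
docker save awesomeapp > awesomeapp.tar

# copy the tarball to the Docker host and load it there
scp awesomeapp.tar root@docker-host:
ssh root@docker-host 'docker load < awesomeapp.tar'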

Distributing server applications

Installation of most open-source server applications requires a mixture of technical knowledge and pain. Docker is poised to become the de facto standard for self-contained application packages that will run practically anywhere.

An example of packaging modern server software for simplified distribution is Discourse, an open-source discussion forum (used by BoingBoing BBS, Ember.js forums, and Mozilla Community).

Discourse's Docker project coordinates both a single-container, all-in-one, simple deployment and a multi-container, service-oriented, high-availability deployment.

The Future

The prospects for web application deployment continue to evolve. Docker currently lies between maturity of its core functionality (stable, secure containers) and emergence of high-level utility (like "drag-and-drop" install and scaling of web services). This experiment is just the beginning.

Dockerfiles provide a simple syntax for building images. The following are a few tips and tricks to help you get the most out of Dockerfiles.

1: Use the cache

Each instruction in a Dockerfile commits its change into a new image, which is then used as the base for the next instruction. If an image already exists with the same parent and instruction (except for ADD), Docker will reuse that image instead of executing the instruction again; this is the cache.

In order to effectively utilize the cache you need to keep your Dockerfiles consistent and only add the alterations at the end. All my Dockerfiles start with the same 5 lines.

FROM ubuntu
MAINTAINER Michael Crosby <michael@crosbymichael.com>

RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y

Changing the MAINTAINER instruction will force Docker to re-execute the subsequent RUN instructions to update apt instead of hitting the cache.

1. Keep common instructions at the top of the Dockerfile to utilize the cache.

2: Use tags

Unless you are experimenting with Docker, you should always pass the -t option to docker build so that the resulting image is tagged. A simple, human-readable tag will help you manage what each image was created for.

docker build -t="crosbymichael/sentry" .

2. Always pass -t to tag the resulting image.

3: EXPOSE-ing ports

Two of the core concepts of Docker are repeatability and portability. Images should be able to run on any host, as many times as needed. With Dockerfiles you have the ability to map the private and public ports; however, you should never map the public port in a Dockerfile. By mapping to the public port on your host, you will only be able to run one instance of your dockerized app per host.

# private and public mapping
EXPOSE 80:8080

# private only
EXPOSE 80

If the consumer of the image cares what public port the container maps to they will pass the -p option when running the image, otherwise, docker will automatically assign a port for the container.
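A short illustration (hypothetical, reusing the sentry image built above and current docker run syntax):

# pin the container's private port 80 to public port 8080 on the host
docker run -d -p 8080:80 crosbymichael/sentry

# or let docker assign a random public port, then look it up
docker run -d crosbymichael/sentry
docker port <container_id> 80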

3. Never map the public port in a Dockerfile.

4: CMD and ENTRYPOINT syntax

Both CMD and ENTRYPOINT are straightforward, but they have a hidden, err, "feature" that can cause issues if you are not aware of it. Two different syntaxes are supported for these instructions.

CMD /bin/echo
# or
CMD ["/bin/echo"]

This may not look like it would be an issue, but the devil in the details will trip you up. If you use the second syntax, where the CMD (or ENTRYPOINT) is an array, it acts exactly like you would expect. If you use the first syntax, without the array, Docker prepends /bin/sh -c to your command. This has always been in Docker as far as I can remember.

Prepending /bin/sh -c can cause unexpected issues and behavior that is not easily understood if you did not know that Docker modified your CMD. Therefore, you should always use the array syntax for both instructions, so that both are executed exactly how you intended.
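One way the difference shows up in practice (an illustrative pair, not from the original post) is variable expansion, which happens in the shell form but not in the exec form:

# shell form: actually runs /bin/sh -c 'echo $HOME', so $HOME is expanded
CMD echo $HOME

# exec form: runs /bin/echo with the literal string "$HOME"
CMD ["/bin/echo", "$HOME"]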

4. Always use the array syntax when using CMD and ENTRYPOINT.

5: CMD and ENTRYPOINT better together

In case you don't know, ENTRYPOINT makes your dockerized application behave like a binary. You can pass arguments to the ENTRYPOINT during docker run and not worry about it being overwritten (unlike CMD). ENTRYPOINT is even better when used with CMD. Let's check out my Rethinkdb Dockerfile and see how to use this.

# Dockerfile for Rethinkdb 
# http://www.rethinkdb.com/

FROM ubuntu

MAINTAINER Michael Crosby <michael@crosbymichael.com>

RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y

RUN apt-get install -y python-software-properties
RUN add-apt-repository ppa:rethinkdb/ppa
RUN apt-get update
RUN apt-get install -y rethinkdb

# Rethinkdb process
EXPOSE 28015
# Rethinkdb admin console
EXPOSE 8080

# Create the /rethinkdb_data dir structure
RUN /usr/bin/rethinkdb create

ENTRYPOINT ["/usr/bin/rethinkdb"]

CMD ["--help"]

This is everything that is required to get Rethinkdb dockerized. We have my standard 5 lines at the top to make sure the base image is updated, ports exposed, etc… With the ENTRYPOINT set, we know that whenever this image is run, all arguments passed during docker run will be arguments to the ENTRYPOINT (/usr/bin/rethinkdb).

I also have a default CMD set in the Dockerfile: --help. If no arguments are passed during docker run, rethinkdb's default help output is displayed to the user. This is the same functionality you would expect when interacting with the rethinkdb binary.

docker run crosbymichael/rethinkdb


Running 'rethinkdb' will create a new data directory or use an existing one,
  and serve as a RethinkDB cluster node.
File path options:
  -d [ --directory ] path           specify directory to store data and metadata
  --io-threads n                    how many simultaneous I/O operations can happen
                                    at the same time

Machine name options:
  -n [ --machine-name ] arg         the name for this machine (as will appear in
                                    the metadata).  If not specified, it will be
                                    randomly chosen from a short list of names.

Network options:
  --bind {all | addr}               add the address of a local interface to listen
                                    on when accepting connections; loopback
                                    addresses are enabled by default
  --cluster-port port               port for receiving connections from other nodes
  --driver-port port                port for rethinkdb protocol client drivers
  -o [ --port-offset ] offset       all ports used locally will have this value added
  -j [ --join ] host:port           host and port of a rethinkdb node to connect to

Now let's run the container with the --bind all argument.

docker run crosbymichael/rethinkdb --bind all


info: Running rethinkdb 1.7.1-0ubuntu1~precise (GCC 4.6.3)...
info: Running on Linux 3.2.0-45-virtual x86_64
info: Loading data from directory /rethinkdb_data
warn: Could not turn off filesystem caching for database file: "/rethinkdb_data/metadata" (Is the file located on a filesystem that doesn't support direct I/O (e.g. some encrypted or journaled file systems)?) This can cause performance problems.
warn: Could not turn off filesystem caching for database file: "/rethinkdb_data/auth_metadata" (Is the file located on a filesystem that doesn't support direct I/O (e.g. some encrypted or journaled file systems)?) This can cause performance problems.
info: Listening for intracluster connections on port 29015
info: Listening for client driver connections on port 28015
info: Listening for administrative HTTP connections on port 8080
info: Listening on addresses: …
info: Server ready
info: Someone asked for the nonwhitelisted file /js/handlebars.runtime-1.0.0.beta.6.js, if this should be accessible add it to the whitelist.

And there it is: a full Rethinkdb instance running, with access to the db and admin console, by interacting with the image the same way you interact with the binary. Very powerful, and yet extremely simple. I love simple.

5. ENTRYPOINT and CMD are better together.

I hope this post helps you get started working with Dockerfiles and building images that we all can use and benefit from. Going forward, I believe Dockerfiles will be a very important part of what makes Docker so simple and easy to use, whether you are consuming or producing images. I plan to invest much of my time in providing a complete, powerful, yet simple solution for building Docker images via the Dockerfile.

Don't be too surprised if you have never heard about it, as I have seen many web developers miss this crucial point. If you want quick figures, there is a table in the book Professional Website Performance: Optimizing the Front End and the Back End by Peter Smith.


The impact of this limit 

How will this limit affect your web page? The answer: a lot. Unless you serve a static page without any images, CSS or JavaScript at all, all these resources have to queue and compete for the available connections to be downloaded. If you take into account that some resources depend on other resources being loaded first, it is easy to see that this limit can greatly affect page load time.

Let's analyse further how a browser loads a webpage. To illustrate, I used Chrome v34 to load one article of my blog (10 ideas to improve Eclipse IDE usability). I prefer Chrome over Firebug because its Developer Tools have the best visualization of page loading. Here is how it looks:

I already cropped the loading page, but you should still see a lot of requests being made. Don't be scared by the complex picture; I just want to emphasize that even a simple webpage needs many HTTP requests to load. In this case, I counted 52 requests, including CSS, images, JavaScript, AJAX and HTML.

If you focus on the right side of the picture, you can see that Chrome did a decent job of highlighting different kinds of resources with different colours, and also managed to capture the timeline of the requests.

Let's see what Chrome tells us about this webpage. As a first step, Chrome loads the main page and spends a very short time parsing it. After reading the main page, Chrome sends a total of 8 parallel requests almost at the same time to load images, CSS and JavaScript. From this we know that Chrome v34 can send up to 8 concurrent requests to a domain. Still, 8 requests are not enough to load the webpage, and you can see more requests being sent as connections become available.

If you want to dig further, you can see that two JavaScript files and one AJAX call (the 3 requests at the bottom) are only sent after one of the JavaScript files has loaded. The explanation is that executing JavaScript triggers more requests. To simplify the situation, I created this simple flowchart:

I tried my best to follow the colour convention of Chrome (green for CSS, purple for images, and light blue for AJAX and HTML). Here is the loading agenda:


  • Load the landing page HTML
  • Load resources for the landing page
  • Execute JavaScript, triggering 2 API calls to load comments and followers
  • Each loaded comment and follower triggers the loading of an avatar
So, at minimum, you have 4 phases of loading a webpage, and each phase depends on the result of the earlier one. However, due to the limit of 8 parallel requests, one phase can be split into 2 or more smaller phases as some requests wait for an available connection. Imagine what happens if this webpage is loaded with IE6 (2 parallel connections, i.e. a minimum of 26 rounds of loading for 52 requests).

Why do browsers have this limit?

You may ask: if this limit can have such a great impact on performance, why don't browsers give us a higher limit so users can enjoy a better browsing experience? Most of the well-known browsers choose not to grant this wish, so that a handful of browsers cannot overload the server, which might otherwise end up classifying the user as a DDoS attacker.

In the past, the common limit was only 2 connections. This may have been sufficient in the early days of the web, when most content was delivered in a single page load, but it soon became a bottleneck as CSS and JavaScript grew popular. Because of this, you can see the trend of increasing this limit in modern browsers. Some browsers even allow you to modify this value (Opera), but it is better not to set it too high unless you want to load-test your own server.

How to handle this limit?

This limit will not cause slowness in your website if you manage your resources well and avoid hitting it. When your page is first loaded, there is a first request containing the HTML content. As the browser processes the HTML, it spawns more requests to load resources like CSS, images and JS. It also executes JavaScript and sends Ajax requests to the server, as you instruct it to.

Fortunately, static resources can be cached and only need to be downloaded the first time. If they cause slowness, it happens only on the first page load and is still tolerable; it is not rare for a user to see the page frame load first while some pictures slowly appear later. If you feel that your resources are too fragmented and consume too many requests, there are tools available that compress and combine resources so the browser can load them in a single request (UglifyJS, Rhino, YUI Compressor, …).

Lack of control over Ajax requests causes more severe problems. I would like to share some examples of poor design that cause slowness in page loading.

1. Loading page content with many Ajax requests

This approach is quite popular because it lets the user feel the progress of page loading and enjoy some important parts of the content while waiting for the rest to load. There is nothing wrong with this, but things get worse when you need more requests to load content than the browser can supply. Say you create 12 Ajax requests but your browser limit is 6: in the best-case scenario, you still need to load resources in two batches. That is still not too bad if these 12 requests are not nested or consecutively executed, since the browser can make use of all available connections to serve the pending requests. A worse situation happens when one request is initiated in another request's callback (nested Ajax requests). If this happens, your webpage is slowed down by your design rather than by the browser limit.

A few years ago, I took over a project that was haunted by performance issues. There were many factors causing the slowness, but one concern was too many Ajax requests. I opened the browser in debug mode and found more than 6 requests being sent to servers to load different parts of the page. Moreover, it kept getting worse, as the project was delivered by teams from different continents and time zones. Features were developed in parallel, and the developer working on a feature would conveniently add a server endpoint and an Ajax request to get the work done. Worried that the situation was getting out of control, we decided to shift the direction of development. The original design was like this:


For most Ajax requests, the response returned a JSON model of the data; the Knockout framework then bound the HTML controls to the models. We did not face the nested-requests issue here, but the loading time could not get any faster because of the browser limit, and many HTTP threads were consumed to serve a single page load. One more problem was the lack of caching: the page contents were pretty static, with minimal customization on some parts of the pages.

After consideration, we decided to cut the number of requests by generating the page contents in a single request. However, if you do not do it properly, it may end up like this:


This is even worse than the original design. It is more or less equivalent to having a limit of 1 connection to the server, with all requests handled one by one.

The proper way to achieve similar performance is to use async programming:


Each promise can be executed in a separate thread (not an HTTP thread), and the response is returned when all the promises have completed. We also applied caching to all of the services to ensure they return quickly. With the new design, the page response is faster and server capacity is improved as well.

2. Failing to manage the request queue

When you make an Ajax request in JavaScript and the browser does not have an available connection to serve it, the request is temporarily put into the request queue. Disaster happens when developers fail to manage this queue properly. This often occurs in rich client applications, which function more like applications than web pages: clicking a button should not trigger loading a new web address; instead, the page content is updated with the results of Ajax requests. The common mistake is to let new requests be created before the existing requests in the queue have been cleaned up.

I have worked on a web application that made more than 10 Ajax requests whenever the user changed the value of a first-level combo box. Imagine what happens if the user changes the value of the combo box 10 times consecutively without any break in between: 100 Ajax requests go into the request queue, and the page seems to hang for a few minutes. This is an intermittent issue, because it only happens if the user manages to create Ajax requests faster than the browser can handle them.

The solution is simple; you have two options. The first: forget about being a rich client application and refresh the whole page to load new content. To persist the value of the combo box, store it as a hash appended to the current URL; in this case, the browser will clear up the queue. The second option is even simpler: block the user from changing the combo box until the queue has been cleared. To avoid a bad experience, you can show a loading bar while the combo box is disabled.

3. Nesting of Ajax requests

I have never seen a business requirement for nesting Ajax requests. Most of the time I have seen nested requests, it was a design mistake. For example, suppose you are a lazy developer and you need to load the flags of every country in the world, sorted by continent. Disaster happens when you decide to write the code this way:

  • Load the continent list
  • For each continent, load its countries
Assuming the world has 5 continents, you spawn 1 + 5 = 6 requests. This is unnecessary, as you could return a complex data structure containing all of this information. Making requests is expensive, and making nested requests is very expensive; using the Facade pattern to get what you want in a single call is the way to go.

October 16th, 2014: Samba service on CentOS 6



yum install samba
chkconfig smb on
useradd smb
smbpasswd -a smb
service smb restart


Then define the shares in /etc/samba/smb.conf (the [share] and [upload] section names are assumptions chosen to match the paths):

[share]
    comment = Share Directories
    path = /opt/data/share
    browseable = yes
    writable = no
    valid users = smb

[upload]
    comment = Share Directories
    path = /opt/data/upload
    browseable = yes
    writable = yes
    valid users = smb


  • Make the upload directory writable by the smb user (see the sketch below)
  • Turn off SELinux; to disable it without rebooting: setenforce 0
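A minimal sketch of those two steps plus a quick sanity check (assuming the [upload] share name used above):

# give the smb user write access to the upload share
chown smb:smb /opt/data/upload

# disable SELinux for the current boot only
setenforce 0

# validate smb.conf and try connecting locally
testparm
smbclient //localhost/upload -U smb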

