
I tried Ubuntu before, but the lack of even a decent pinyin input method made it painful, and after a while I gave up on it as too unfriendly. The open-source community has really flourished since then: a lot of software now has a Linux version.

Some essential tools

  1. Sogou Pinyin

    Sogou Pinyin is one of the best pinyin input methods around. It now supports not only the traditional Windows and Android, but also Linux and macOS, and even iOS, which says a lot about its standing among pinyin input methods.

  2. Youdao Dictionary

    Youdao Dictionary also supports almost every popular platform and is an essential tool for reading papers and browsing sites beyond the Great Firewall.

  3. WeChat client

    For WeChat I use Electronic WeChat, a third-party WeChat client built on the open-source Electron framework that supports Linux and macOS.

  4. NetEase Cloud Music

    NetEase Cloud Music is a great way to listen to music, and it too supports all the major operating systems.

  5. Steam

    Linux used to be a poor platform for entertainment, but with Steam even big-name games are no longer a problem.

  6. QQ

    Tencent's support for Linux is rather poor; even the WeChat client above was built by enthusiasts on top of the WeChat web version. My solution for running QQ on Linux comes from here, and there are many similar tutorials online. Settling for Wine QQ is the fallback.

Gstreamer

The official website of GStreamer is here. Its site describes GStreamer as follows:

GStreamer is a library for constructing graphs of media-handling components. The applications it supports range from simple Ogg/Vorbis playback, audio/video streaming to complex audio (mixing) and video (non-linear editing) processing.

Applications can take advantage of advances in codec and filter technology transparently. Developers can add new codecs and filters by writing a simple plugin with a clean, generic interface.

GStreamer is released under the LGPL. The 1.x series is API and ABI stable and supersedes the previous stable 0.10 series. Both can be installed in parallel.

A Simple Tutorial

I started learning GStreamer just two days ago, and I found a very useful tutorial, Gstreamer Small Tutorial by Arash Shafiei. It can serve as a stepping-stone toward more complex applications. A more detailed tutorial can be found on GStreamer's website.

My Own Trial

Though Arash Shafiei provides an excellent sample, a few modifications are needed for the application to play video correctly.

  1. Use the more recent API by replacing gst_pad_get_caps with gst_pad_query_caps.
  2. There is a mistake in Arash Shafiei's sample static void pad_added_handler(GstElement *src, GstPad *new_pad, CustomData *data): the original code refuses to link the video sink once the audio sink has been linked, regardless of whether the video sink itself is linked yet.
  3. A demuxer is added to the pipeline to parse the stream coming out of the file source element.

My own code is listed below:

#include <gst/gst.h>
#include <glib.h>

/* Structure to contain all our information, so we can pass it to callbacks */
typedef struct _CustomData
{
  GstElement *pipeline;
  GstElement *source;
  GstElement *demuxer;
  GstElement *video_convert;
  GstElement *audio_convert;
  GstElement *video_sink;
  GstElement *audio_sink;
} CustomData;

/* Handler for the pad-added signal */
static void pad_added_handler(GstElement *src, GstPad *new_pad, CustomData *data)
{
  GstPad *sink_pad_audio = gst_element_get_static_pad(data->audio_convert, "sink");
  GstPad *sink_pad_video = gst_element_get_static_pad(data->video_convert, "sink");
  GstPadLinkReturn ret;
  GstCaps *new_pad_caps = NULL;
  GstStructure *new_pad_struct = NULL;
  const gchar *new_pad_type = NULL;

  g_print("Received new pad '%s' from '%s':\n", GST_PAD_NAME(new_pad), GST_ELEMENT_NAME(src));

  /* Check the new pad's type */
  new_pad_caps = gst_pad_query_caps(new_pad, NULL);
  new_pad_struct = gst_caps_get_structure(new_pad_caps, 0);
  new_pad_type = gst_structure_get_name(new_pad_struct);
  if (g_str_has_prefix(new_pad_type, "audio/x-raw"))
  {
    /* If our audio converter is already linked, we have nothing to do here */
    if (gst_pad_is_linked(sink_pad_audio))
    {
      g_print(" Type is '%s'.\n", new_pad_type);
      g_print(" We are already linked. Ignoring.\n");
      goto exit;
    }
    /* Attempt the link */
    ret = gst_pad_link(new_pad, sink_pad_audio);
    if (GST_PAD_LINK_FAILED(ret))
    {
      g_print(" Type is '%s' but link failed.\n", new_pad_type);
    }
    else
    {
      g_print(" Link succeeded (type '%s').\n", new_pad_type);
    }
  }
  else if (g_str_has_prefix(new_pad_type, "video/x-raw"))
  {
    /* If our video converter is already linked, we have nothing to do here */
    if (gst_pad_is_linked(sink_pad_video))
    {
      g_print(" Type is '%s'.\n", new_pad_type);
      g_print(" We are already linked. Ignoring.\n");
      goto exit;
    }
    /* Attempt the link */
    ret = gst_pad_link(new_pad, sink_pad_video);
    if (GST_PAD_LINK_FAILED(ret))
    {
      g_print(" Type is '%s' but link failed.\n", new_pad_type);
    }
    else
    {
      g_print(" Link succeeded (type '%s').\n", new_pad_type);
    }
  }
  else
  {
    g_print(" It has type '%s' which is not raw audio or video. Ignoring.\n", new_pad_type);
    goto exit;
  }

exit:
  /* Unreference the new pad's caps, if we got them */
  if (new_pad_caps != NULL)
    gst_caps_unref(new_pad_caps);
  /* Unreference the sink pads */
  gst_object_unref(sink_pad_audio);
  gst_object_unref(sink_pad_video);
}

int main(int argc, char *argv[])
{
  CustomData data;
  GstBus *bus;
  GstMessage *msg;
  GstStateChangeReturn ret;
  gboolean terminate = FALSE;

  if (argc != 2)
  {
    g_printerr("usage: ./player <path_to_a_video>\n");
    return 0;
  }

  /* Initialize GStreamer */
  gst_init(&argc, &argv);

  /* Create the elements */
  data.source = gst_element_factory_make("filesrc", "source");
  data.demuxer = gst_element_factory_make("decodebin", "demuxer");
  data.audio_convert = gst_element_factory_make("audioconvert", "audio_convert");
  data.audio_sink = gst_element_factory_make("autoaudiosink", "audio_sink");
  data.video_convert = gst_element_factory_make("videoconvert", "video_convert");
  data.video_sink = gst_element_factory_make("autovideosink", "video_sink");

  /* Create the empty pipeline */
  data.pipeline = gst_pipeline_new("test-pipeline");
  if (!data.pipeline || !data.source || !data.demuxer || !data.audio_convert ||
      !data.audio_sink || !data.video_convert || !data.video_sink)
  {
    g_printerr("Not all elements could be created.\n");
    return -1;
  }

  /* Build the pipeline. Note that we are NOT linking the demuxer's source pads
   * at this point; that happens later, in the pad-added handler. */
  gst_bin_add_many(GST_BIN(data.pipeline), data.source, data.demuxer,
      data.audio_convert, data.audio_sink, data.video_convert, data.video_sink, NULL);
  if (!gst_element_link(data.source, data.demuxer))
  {
    g_printerr("Elements could not be linked.\n");
    gst_object_unref(data.pipeline);
    return -1;
  }
  if (!gst_element_link(data.audio_convert, data.audio_sink))
  {
    g_printerr("Elements could not be linked.\n");
    gst_object_unref(data.pipeline);
    return -1;
  }
  if (!gst_element_link(data.video_convert, data.video_sink))
  {
    g_printerr("Elements could not be linked.\n");
    gst_object_unref(data.pipeline);
    return -1;
  }

  /* Set the file to play */
  g_object_set(data.source, "location", argv[1], NULL);

  /* Connect to the pad-added signal */
  g_signal_connect(data.demuxer, "pad-added", G_CALLBACK(pad_added_handler), &data);

  /* Start playing */
  ret = gst_element_set_state(data.pipeline, GST_STATE_PLAYING);
  if (ret == GST_STATE_CHANGE_FAILURE)
  {
    g_printerr("Unable to set the pipeline to the playing state.\n");
    gst_object_unref(data.pipeline);
    return -1;
  }

  /* Listen to the bus */
  bus = gst_element_get_bus(data.pipeline);
  do
  {
    msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_STATE_CHANGED | GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    /* Parse message */
    if (msg != NULL)
    {
      GError *err;
      gchar *debug_info;
      switch (GST_MESSAGE_TYPE(msg))
      {
        case GST_MESSAGE_ERROR:
          gst_message_parse_error(msg, &err, &debug_info);
          g_printerr("Error received from element %s: %s\n", GST_OBJECT_NAME(msg->src), err->message);
          g_printerr("Debugging information: %s\n", debug_info ? debug_info : "none");
          g_clear_error(&err);
          g_free(debug_info);
          terminate = TRUE;
          break;
        case GST_MESSAGE_EOS:
          g_print("End-Of-Stream reached.\n");
          terminate = TRUE;
          break;
        case GST_MESSAGE_STATE_CHANGED:
          /* We are only interested in state-changed messages from the pipeline */
          if (GST_MESSAGE_SRC(msg) == GST_OBJECT(data.pipeline))
          {
            GstState old_state, new_state, pending_state;
            gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
            g_print("Pipeline state changed from %s to %s:\n",
                gst_element_state_get_name(old_state), gst_element_state_get_name(new_state));
          }
          break;
        default:
          /* We should not reach here */
          g_printerr("Unexpected message received.\n");
          break;
      }
      gst_message_unref(msg);
    }
  } while (!terminate);

  /* Free resources */
  gst_object_unref(bus);
  gst_element_set_state(data.pipeline, GST_STATE_NULL);
  gst_object_unref(data.pipeline);
  return 0;
}

The command to compile it is:

gcc player.c -o player `pkg-config --cflags --libs gstreamer-1.0`

WeChat released a Windows PC client long ago, which makes it convenient to chat with family and friends while sitting at a PC instead of constantly picking up the phone. Lately I have been using Ubuntu most of the time, and for a long while the web version of WeChat was the only option. Today I found something nice: Electronic WeChat.

Electronic WeChat is a third-party WeChat client built on the open-source Electron framework, supporting Linux and macOS. It has some nice features, including sending images and files by dragging them in, displaying sticker messages, and opening redirected links directly.

To install Electronic WeChat on Linux, go here and pick the build for your platform. For example, I chose linux-x64.tar.gz; after running tar zxvf linux-x64.tar.gz, launch electronic-wechat and scan the QR code with your phone to log in.

First Post on 20161006

I spent the whole day today on GStreamer. I got up at nine and started working; before I knew it, it was lunchtime, and after lunch it was suddenly dinnertime. This is my first time working with this kind of codec library, and it is a real headache: a full day in and still not much to show for it. Roughly, my understanding is that GStreamer works by assembling a bunch of elements into a pipeline; a multimedia file then flows through that pipeline, its audio and video are decoded, and the results are sent to the corresponding output devices.

Put that way it doesn't sound too hard, and the gst command-line tool can indeed play a video. But how do I weave all of this into my own program with C code? And how to display video full-screen is another question: many examples online use GTK, which is yet another new thing for me. I searched a lot on GitHub but couldn't find a simple, easy-to-use example. It looks like I will have to sit down and read the documentation patiently.
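The element-and-pipeline idea can be tried from a terminal before writing any C. Assuming GStreamer 1.x and its base plugins are installed, a pipeline description equivalent in spirit to the sample program might look like this (video.mp4 is a placeholder path):

```shell
# filesrc feeds decodebin; decodebin's dynamically created pads
# feed separate audio and video branches
gst-launch-1.0 filesrc location=video.mp4 ! decodebin name=dec \
  dec. ! queue ! audioconvert ! autoaudiosink \
  dec. ! queue ! videoconvert ! autovideosink
```

Each `!` links two elements, and `dec.` requests a pad from the named decodebin, which is exactly what the pad-added handler does in code.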

Update on 20161007

I found a very good GStreamer tutorial, Gstreamer Small Tutorial. It is only ten pages and reads smoothly; the author explains how GStreamer works very clearly and includes a worked example, which helps a great deal with understanding.

Today I accidentally upgraded my Ubuntu 14.04 to Ubuntu 16.04, after which Caffe could no longer be built. A few modifications are needed to get past the issues.

  1. Hack the CUDA header. Suppress the error unsupported GNU version! gcc versions later than 4.9 are not supported! by replacing #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9) with #if __GNUC__ > 5 || (__GNUC__ == 5 && __GNUC_MINOR__ > 9).
  2. Configure Caffe's Makefile. Replace NVCCFLAGS += -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS) with NVCCFLAGS += -D_FORCE_INLINES -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
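If you would rather script the header edit than do it by hand, the replacement can be done with sed. The header shipped with CUDA contains a check of the form #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9), and raising the 4s to 5s is what admits gcc 5. The sketch below rehearses the pattern on a scratch copy in /tmp first; the real file usually lives at /usr/local/cuda/include/host_config.h (an assumed path; edit it with sudo once the pattern is verified):

```shell
# Rehearse the edit on a scratch copy to confirm the sed pattern matches
printf '#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9)\n' > /tmp/host_config_probe.h
# Raise the gcc version check from 4.9 to 5.9 (note the escaped & in the replacement)
sed -i 's/__GNUC__ > 4 || (__GNUC__ == 4 \&\& __GNUC_MINOR__ > 9)/__GNUC__ > 5 || (__GNUC__ == 5 \&\& __GNUC_MINOR__ > 9)/' /tmp/host_config_probe.h
cat /tmp/host_config_probe.h
```

Once the scratch copy shows the expected line, run the same sed (with sudo) against the real header.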

The remaining steps for using Caffe can be found here and here.

Recently I have read quite a few novels and watched several movies.

The movies were ones everyone was talking about: Train to Busan and Tunnel. The novels were again Keigo Higashino's: Graduation and Tokio.

Both Korean films have plenty of flaws, such as the brain-dead carriage logistics in Train to Busan and the miraculously durable phone battery in Tunnel, but when it comes to gripping an audience they really know the formula. As stories they are reasonably complete: the plotting and the pacing both hold together, and they qualify as competent, industrially produced genre films. Compared with many domestic films, that level is already quite good. Many netizens say this or that film is streets ahead of domestic cinema, and these two are among them; whether or not it is literally streets ahead, in overall quality even a bad Korean film might rank above average at home. Oddly, certain setups feel natural in Korean film and television but jarring in domestic films. Take the character setup in Big Fish & Begonia: I love you, you love him, I sacrifice everything for you. On paper it looks perfectly fine, yet somehow in Big Fish & Begonia it feels forced. I don't know whether I hold a stereotype about Korean films or am too harsh on domestic ones, but it always seems that most domestic films handle this kind of character setup poorly.

There is not much to say about the two Higashino novels. He is a prolific writer and I have read a lot of his work lately. It is not all outstanding; many plot devices feel contrived or lean on too many coincidences. His books may not be the best mysteries in the world, but his probing of human nature is genuinely deep. Whether it is the evil side or the good side, Higashino magnifies it so that readers can feel it without effort, and then reflect on whether we carry the same evil or goodness in our own lives.

After all that reading and watching, what unexpectedly moved me was a cross-stitch hanging on the dining-room wall at home, whose text is the Lanting Xu (Preface to the Orchid Pavilion Collection). Back in school it left no particular impression; I simply memorized it. But reading it again over dinner the other day, I immediately connected it to the films and novels I had just seen: all these people are sighing the same sigh, how good it is to be alive! The line "固知一死生为虚诞,齐彭殇为妄作" ("to see life and death as one is absurd; to equate long life with early death is folly") is Wang Xizhi, having savored natural beauty and the pleasures of life, wishing he could enjoy more of such a life. At some point I seem to have had a similar feeling: suddenly there are too many beautiful things, too many lives I want to experience, and time passes too quickly to savor them before they are gone.

TITLE: Face Detection with End-to-End Integration of a ConvNet and a 3D Model

AUTHOR: Yunzhu Li, Benyuan Sun, Tianfu Wu, Yizhou Wang

ASSOCIATION: Peking University, North Carolina State University

FROM: arXiv:1606.00850

CONTRIBUTIONS

  1. It presents a simple yet effective method to integrate a ConvNet and a 3D model in end-to-end learning with a multi-task loss, for face detection in the wild.
  2. It addresses two limitations in adapting the state-of-the-art Faster R-CNN for face detection: it eliminates the heuristic design of anchor boxes by leveraging a 3D model, and replaces the generic, predefined RoI pooling with a configuration pooling that exploits the underlying structural configuration of the object.
  3. It obtains very competitive state-of-the-art performance on the FDDB and AFW benchmarks.

METHOD

The main inference scheme is shown in the following figure.

The input image is fed into a ConvNet, e.g. VGG, with an upsampling layer. The network then generates face proposals, scored by summing the log probabilities of the keypoints predicted from the predefined 3D face model.

Some details

  1. The loss of keypoint labels is defined as

    $$\mathcal{L}_{cls}(\omega) = -\frac{1}{m}\sum_{i=1}^{m}\log p_{l_i}^{\mathbf{x}_i}$$

    where $\omega$ stands for the learnable weights of the ConvNet, $m$ is the number of keypoints, and $p_{l_i}^{\mathbf{x}_i}$ is the predicted probability that the point at location $\mathbf{x}_i$, which is obtained from the annotations, belongs to label $l_i$.

  2. The loss of keypoint locations is built from the smooth $l_1$ loss, $smooth(\cdot)$. For each ground-truth keypoint, a set of predicted keypoints can be generated from the 3D face model and the 3D transformation parameters. If each face has $m$ keypoints, then $m$ sets of predicted keypoints are generated, so $m$ locations are predicted for each keypoint.

  3. The configuration pooling layer is similar to the RoI pooling layer in Faster R-CNN, except that features are extracted based on the locations and relations of the keypoints rather than on a predefined receptive field.
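For reference, the smooth $l_1$ loss used in the keypoint-location term above is the robust regression loss popularized by Fast R-CNN:

```latex
\text{smooth}_{l_1}(x) =
\begin{cases}
0.5\,x^2 & \text{if } |x| < 1 \\
|x| - 0.5 & \text{otherwise}
\end{cases}
```

It is quadratic near zero and linear for large residuals, so it is less sensitive to outliers than a plain $l_2$ loss.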

Recently I have been searching for an easy-to-use media framework library that can be embedded in my own code. First I tried the ffmpeg and SDL2 libraries, which I found too difficult since I have little experience developing multimedia applications. Then I found VLC media player after searching the internet for the keyword "media player library". I thought it might be a good choice and looked for related projects on GitHub. There is a very simple demo project called Controlling VLC media player using OpenCV. I was attracted by the word "OpenCV" because I am a computer vision engineer. After I clicked the link: BINGO! THAT'S ALL I NEED!

Install libvlc

  1. A simple command installs libvlc: sudo apt-get install libvlc-dev.
  2. One may also need to install the VLC media player itself to get the plugins working: sudo apt-get install vlc.

Use libvlc

The following code is a simple demo of how to use libvlc to play a video:

#include <vlc/vlc.h>

/* Plays the video specified as a command-line argument */
int main(int argc, char *argv[])
{
    libvlc_instance_t *instance = libvlc_new(0, NULL);
    libvlc_media_t *media = libvlc_media_new_path(instance, argv[1]);
    libvlc_media_player_t *mplayer = libvlc_media_player_new_from_media(media);

    /* Start playback; libvlc spawns its own threads for decoding and output */
    libvlc_media_player_play(mplayer);

    while(1)
    {
        //something to do while playing the video
        //...
    }

    libvlc_media_player_release(mplayer);
    libvlc_media_release(media);
    libvlc_release(instance);
    return 0;
}

libvlc handles the multimedia threads itself; all we need to do is control the player from the main thread. More documentation can be found here.

A while ago I casually bought a copy of Sapiens at a train station, intending only to kill time on the train. I did not expect it to be such a high-quality book.

From the title I assumed it was a book recounting the major events of human history; since I have always been interested in history, I bought it without even glancing at the blurb or the preface. Only on seeing the table of contents did I realize this is an entirely different kind of history book. The original English title is Sapiens: A Brief History of Humankind. Sapiens is the biological name of our own species, and the book is about how Homo sapiens evolved from an unremarkable animal into "an animal that became a god". The author both acknowledges the success of sapiens and warns of its destruction.

At the start of the first chapter, the author writes:

About 13.5 billion years ago, matter, energy, time and space came into being in what is known as the Big Bang. The story of these fundamental features of our universe is called "physics".

About 300,000 years after that, matter and energy started to coalesce into complex structures called "atoms", which then combined into "molecules". The story of atoms and molecules and how they interact is called "chemistry".

About 3.8 billion years ago, on a planet called Earth, certain molecules combined to form particularly large and intricate structures called "organisms". The story of organisms is called "biology".

And about 70,000 years ago, organisms belonging to the species Homo sapiens started to form even more elaborate structures called "cultures". The subsequent development of these human cultures is called "history".

That short passage was a revelation; I had never heard anyone frame human development from this angle. It feels like a perfect summary of human history: where humanity came from, and what humanity does.

The author holds that three major revolutions shaped human history: the Cognitive Revolution, about 70,000 years ago, kick-started history; the Agricultural Revolution, about 12,000 years ago, sped it up; and the Scientific Revolution, about 500 years ago, may well bring history to a close and start something entirely new. The whole book unfolds from these three revolutions, gradually showing the reader how they changed humankind and the world.

The book treats these three revolutions from multiple angles: biological evolution, cultural development, scientific progress. It mixes serious academic findings with humorous, vivid examples, along with passages that read to me almost like philosophy.

The first argument that struck me concerns the Cognitive Revolution, and it even made me feel that humans are a species born with "original sin". We like to think of ourselves as unique creatures made by a god, but in biological evolution our lineage was never just Homo sapiens. "Human" really means "an animal of the genus Homo", which includes Neanderthals, Homo erectus, Homo soloensis, Homo floresiensis, Denisovans, and others. Just as our pet dogs come in many breeds, so did humans. Yet only we sapiens remain, and sapiens wiped out the other human species by bloody means. Setting aside our later responsibility for environmental and species crises during the Agricultural and Scientific Revolutions, the slaughter of our fellow humans alone is chilling.

The second striking argument makes what I just wrote look rather idealist, and indeed makes all of human culture look idealist. The author's heading for this section is "the imagined order exists in the links between the thoughts of people", which is quite a mouthful. He illustrates it with the Peugeot company: by what standard can we say that Peugeot really exists? The actual cars Peugeot produces cannot represent the company, because if we destroyed every one of them, we would still believe Peugeot could go on making Peugeot cars. Nor can its employees represent it, because if they all perished, the company could simply hire new ones. No physical thing can stand for Peugeot; the company is only a collective imagining. To destroy Peugeot we would have to eliminate it legally, say by declaring it an illegal organization and banning it. But what is law? Another collective imagining. If I declare that I no longer believe in the law, I will surely be punished by it, because the great majority still believe in it. But what if the great majority stopped believing?

These two arguments raised many questions for me. When we run into troubles, can we step back and ask whether we are inside some collective imagining? Can we step slightly outside of it? If all the collective imaginings disappeared, could we still call ourselves "human"? And that raises yet another question: if we accept that "human" is just another ordinary species, why do we even ask whether we can be called "human"?

Joshua-s-Blog

I started this repo to manage the code of my personal website, JOSHUA'S BLOG, which is hosted on WebFaction and implemented with Django and several other packages. The repository is here.

Besides managing my own code, viewers can use this project to learn how to build a website with Django. I will write down the steps needed to use this code.

Finally, welcome to visit JOSHUA’s BLOG.

Environment

  1. Ubuntu 14.04. Though Ubuntu 16.04 has been released for a while, I am still using the older LTS release. I am not sure the following instructions work on Ubuntu 16.04.
  2. Apache 2.4. I chose Apache 2.4 because my deployment server is powered by it. The Apache HTTP Server Project develops and maintains an open-source HTTP server for modern operating systems including UNIX and Windows.
  3. WSGI 1.5. WSGI is the Web Server Gateway Interface, a specification that describes how a web server communicates with web applications, and how web applications can be chained together to process one request.
  4. Django 1.7.1. Hmm… I am still using an ancient version of Django, again because I want exactly the same development environment as the deployment one. Django is a high-level Python web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of web development, so you can focus on writing your app without reinventing the wheel. It's free and open source.
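To make item 3 concrete, here is a minimal WSGI application; this is the same shape of callable that Django's wsgi.py ultimately exposes to mod_wsgi (the function name and response text are purely illustrative):

```python
# A minimal WSGI application: the server calls it once per request,
# passing the request environ dict and a start_response callable.
def application(environ, start_response):
    body = b"Hello from WSGI!"
    status = "200 OK"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response(status, headers)   # report status and headers to the server
    return [body]                     # iterable of byte strings
```

A WSGI server invokes this callable for every request; Django simply provides a much more elaborate implementation of the same interface.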

Instructions

Install Apache 2.4

Apache can be installed easily using apt-get: sudo apt-get install apache2. Then apachectl -v checks the Apache version and confirms the installation succeeded.

Install mod_wsgi

mod_wsgi can be installed using pip, which itself can be installed with sudo apt-get install python-pip.

  1. install mod_wsgi using pip install mod_wsgi. I met an error about missing Apache httpd server packages, described here. I bypassed it with sudo apt-get install apache2-mpm-worker apache2-dev.
  2. download mod_wsgi-3.5.tar.gz from here.
  3. extract the files from the package using tar xvfz mod_wsgi-3.5.tar.gz.
  4. enter the directory: cd mod_wsgi-3.5.
  5. configure with ./configure
  6. make
  7. sudo make install

Install Django 1.7.1

Install Django with the command pip install Django==1.7.1. To verify the installation, try import django in a Python console; if no error is raised, the installation succeeded.

Set up joshua_blog project with Apache

Note: after downloading the project, you should rename its folder to joshua_blog.

  1. add the following code to the file of /etc/apache2/apache2.conf

     LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
     WSGIScriptAlias / /home/joshua/CODE/PYTHON/joshua_blog/joshua_blog/wsgi.py
     Alias /media/ /home/joshua/CODE/PYTHON/joshua_blog/media/
     Alias /static/ /home/joshua/CODE/PYTHON/joshua_blog/static/
    
     <Directory /home/joshua/CODE/PYTHON/joshua_blog/static>
         Require all granted
     </Directory>
    
     <Directory /home/joshua/CODE/PYTHON/joshua_blog/media>
         Require all granted
     </Directory>
    
     <Directory /home/joshua/CODE/PYTHON/joshua_blog/joshua_blog>
         <Files wsgi.py>
             Require all granted
         </Files>
     </Directory>
    
     WSGIPythonPath /home/joshua/anaconda/lib/python2.7/site-packages
     ServerName localhost:80
    
  2. add the following code to the file of joshua_blog/joshua_blog/wsgi.py

     import sys
     sys.path.append("/path/joshua_blog/")
    

    in my case, /path is /home/joshua/CODE/PYTHON/, so the appended entry is /home/joshua/CODE/PYTHON/joshua_blog/.

  3. restart apache server by sudo service apache2 restart

Install dependency modules

At this point the environment is set up, but when we visit 127.0.0.1 the web server returns a 500 error. That is because several modules required by joshua_blog still need to be installed manually. The missing modules are listed in /var/log/apache2/error.log. Next we will install them.

  1. django-bootstrap. The author suggests installing this app with pip install django-bootstrap, but the latest version requires Django>=1.8. Thus we need to download django-bootstrap from the 6.x.x branch, which can be found here. Extract the package, enter its directory, and install with python setup.py install.
  2. django-filemanager. The installation instructions can be found here.
  3. django-disqus. Install with pip install django-disqus. A brief introduction in Chinese can be found here.
  4. unidecode. Install with pip install unidecode.
  5. markdown2. Install with pip install markdown2. A brief introduction in Chinese can be found here.

Finally we can visit the website at 127.0.0.1 and log in as the administrator with the username joshua and the password joshua. We may still hit other issues:

  1. We may get an error reading "attempt to write a readonly database". Change the database file's mode with chmod 666 db.sqlite3.
  2. Then another error may appear: "unable to open database file". Change the owner of the whole project: sudo chown www-data joshua_blog.