Original work by 花满楼
小白: You've already covered using nginx to serve a live stream and using the camera to record — with that, we can push a stream.
花满楼: Before, we pushed the stream with the ffmpeg command, which gives limited control. Implementing it in code gives us flexible, fine-grained control.
This article shows how to implement the publish (push) side of live streaming in code.
Here is the demo code for pushing the stream:
#include <stdio.h>

#include "ffmpeg/include/libavformat/avformat.h"
#include "ffmpeg/include/libavcodec/avcodec.h"
#include "ffmpeg/include/libavutil/time.h"  // av_gettime / av_usleep

void publishstream() {
    const char* srcfile = "t.mp4";
    const char* streamserverurl = "rtmp://localhost/rtmpdemo/test1";
    av_register_all();
    avformat_network_init();
    av_log_set_level(AV_LOG_DEBUG);
    int status = 0;
    AVFormatContext* formatcontext = avformat_alloc_context();
    status = avformat_open_input(&formatcontext, srcfile, NULL, NULL);
    if (status >= 0) {
        status = avformat_find_stream_info(formatcontext, NULL);
        if (status >= 0) {
            // find the video stream
            int videoindex = -1;
            for (int i = 0; i < formatcontext->nb_streams; i++) {
                if (formatcontext->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
                    videoindex = i;
                    break;
                }
            }
            if (videoindex >= 0) {
                // RTMP carries FLV, so mux the output as flv
                AVFormatContext* outformatcontext;
                avformat_alloc_output_context2(&outformatcontext, NULL, "flv", streamserverurl);
                if (outformatcontext) {
                    status = -1;
                    // create one output stream per input stream, copying codec parameters
                    for (int i = 0; i < formatcontext->nb_streams; i++) {
                        AVStream* onestream = formatcontext->streams[i];
                        AVStream* newstream = avformat_new_stream(outformatcontext, onestream->codec->codec);
                        status = newstream ? 0 : -1;
                        if (status == 0) {
                            status = avcodec_copy_context(newstream->codec, onestream->codec);
                            if (status >= 0) {
                                newstream->codec->codec_tag = 0;
                                if (outformatcontext->oformat->flags & AVFMT_GLOBALHEADER) {
                                    newstream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
                                }
                            }
                        }
                    }
                    if (status >= 0) {
                        AVOutputFormat* outformat = outformatcontext->oformat;
                        // deliberately wait before pushing, so pulling clients have
                        // time to start and receive the video's pps/sps
                        av_usleep(5 * 1000 * 1000);
                        if (!(outformat->flags & AVFMT_NOFILE)) {
                            av_dump_format(outformatcontext, 0, streamserverurl, 1);
                            status = avio_open(&outformatcontext->pb, streamserverurl, AVIO_FLAG_WRITE);
                            if (status >= 0) {
                                status = avformat_write_header(outformatcontext, NULL);
                                if (status >= 0) {
                                    AVPacket packet;
                                    int videoframeidx = 0;
                                    int64_t starttime = av_gettime();
                                    while (1) {
                                        status = av_read_frame(formatcontext, &packet);
                                        if (status < 0) {
                                            break;
                                        }
                                        if (packet.pts == AV_NOPTS_VALUE) {
                                            // some inputs carry no pts; synthesize one from the frame rate
                                            av_log(NULL, AV_LOG_DEBUG, "set packet.pts\n");
                                            AVRational video_time_base = formatcontext->streams[videoindex]->time_base;
                                            int64_t frameduration = (double)AV_TIME_BASE / av_q2d(formatcontext->streams[videoindex]->r_frame_rate);
                                            packet.pts = (double)(videoframeidx * frameduration) / (double)(av_q2d(video_time_base) * AV_TIME_BASE);
                                            packet.dts = packet.pts;
                                            packet.duration = (double)frameduration / (double)(av_q2d(video_time_base) * AV_TIME_BASE);
                                        }
                                        if (packet.stream_index == videoindex) {
                                            // pace the push: never send faster than real time
                                            AVRational video_time_base = formatcontext->streams[videoindex]->time_base;
                                            AVRational time_base_q = {1, AV_TIME_BASE};
                                            int64_t cur_pts = av_rescale_q(packet.dts, video_time_base, time_base_q);
                                            int64_t curtime = av_gettime() - starttime;
                                            av_log(NULL, AV_LOG_DEBUG, "on video frame curpts=%lld curtime=%lld\n", cur_pts, curtime);
                                            if (cur_pts > curtime) {
                                                av_usleep(cur_pts - curtime);
                                            }
                                        }
                                        // rescale timestamps from the input time base to the output time base
                                        AVStream* instream = formatcontext->streams[packet.stream_index];
                                        AVStream* outstream = outformatcontext->streams[packet.stream_index];
                                        packet.pts = av_rescale_q_rnd(packet.pts, instream->time_base, outstream->time_base, AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
                                        packet.dts = av_rescale_q_rnd(packet.dts, instream->time_base, outstream->time_base, AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
                                        packet.duration = av_rescale_q(packet.duration, instream->time_base, outstream->time_base);
                                        packet.pos = -1;
                                        if (packet.stream_index == videoindex) {
                                            videoframeidx++;
                                        }
                                        status = av_interleaved_write_frame(outformatcontext, &packet);
                                        if (status < 0) {
                                            break;
                                        }
                                    }
                                    av_write_trailer(outformatcontext);
                                }
                                avio_close(outformatcontext->pb);
                            }
                        }
                    }
                    avformat_free_context(outformatcontext);
                }
            }
        }
        avformat_close_input(&formatcontext);
    }
    avformat_free_context(formatcontext);
}

int main(int argc, char *argv[]) {
    publishstream();
    return 0;
}
Here a local video file serves as the content, simulating a live push. Functionally, this is equivalent to invoking the ffmpeg command directly (the -re flag makes ffmpeg read the input at its native frame rate, the same pacing the code does by hand):
sudo ffmpeg -re -i Movie-1.mp4 -vcodec copy -f flv rtmp://localhost/rtmpdemo/test1
Of course, you can also push while recording, and you can pull and play the stream on a different computer or phone.
One prerequisite: nginx must be set up and running as the stream server. See the earlier article "流媒体服务器,给你好看", which covers both video-on-demand and live streaming with nginx.
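For reference, a minimal nginx-rtmp-module configuration matching the URL in the demo might look like the fragment below (a sketch, not the config from the referenced article; the application name `rtmpdemo` must match the path in the push URL, and 1935 is the default RTMP port):

```nginx
rtmp {
    server {
        listen 1935;
        application rtmpdemo {
            live on;      # accept live publishing
            record off;   # do not record pushed streams to disk
        }
    }
}
```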
Once the broadcast starts, this stream server does not resend the parameters required for video decoding (pps/sps) to clients that join mid-stream. So when testing, make sure the playback client receives the stream from the first frame. That is why the demo code deliberately sleeps a few seconds before pushing: it gives the player time to start and receive everything that is pushed, including those key parameters.
For more on H.264, or on using FFmpeg in general, see the earlier articles or watch for future updates.
小白: That was a long speech and not very useful — I'll just read the code!
花满楼: If you need to master the details, you'd best write the code yourself.
小白: I'm just looking!
Implementing Live-Stream Publishing in Code
Original article: http://blog.51cto.com/13136504/2059555
Date: 2024-10-10 04:44:21