Simultaneous use of 3 usb camera inputs on Windows 10?
sink writer and h264 colorspace flags
Hi
I'm using the Media Foundation Sinkwriter with the h264 encoder. I'm not having any success trying to set the colorspace flags in the h264 stream (color primaries, transfer function, yuv color matrix, see Annex E of the h264 specs, 'Video usability information'). While some video players, like Windows Media Player or VLC, are able to deduce the correct colorspace (usually bt.601 for SD video and bt.709 for HD) from the video resolution, other players like Quicktime or the Windows 'Movies & TV' app need these flags in order to select the correct space. The latter players will default to bt.601 even on HD video if no flags are present, resulting in a color shift.
I have tried setting the MF_MT_VIDEO_PRIMARIES, MF_MT_TRANSFER_FUNCTION and MF_MT_YUV_MATRIX attributes on both the input and output media types while constructing the sink writer. Unfortunately they don't seem to have any effect. No flags are written to the resulting MP4 file.
This leads to the ironic situation where the colors of an MP4 written on Windows with the sink writer are incorrect when opened by, for example, the 'Movies & TV' app, while the same video encoded with AVFoundation on OS X shows the correct colors in that app, because AVFoundation writes the appropriate color-space tags to the h264 stream.
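For reference, the resolution-based deduction those players make when no flags are present can be sketched portably. This is a hypothetical helper, not an MF API; the exact HD/SD threshold is an assumption, and the numeric values are the H.264 Annex E codes.

```cpp
#include <cassert>

// Hypothetical helper (not a Media Foundation API) illustrating the
// guess players make when the stream carries no VUI colorspace flags.
// Values are H.264 Annex E codes: 1 = BT.709, 6 = SMPTE 170M (BT.601).
struct Vui
{
    int primaries;
    int transfer;
    int matrix;
};

Vui GuessVuiFromResolution(int width, int height)
{
    // Common convention (an assumption here): anything larger than
    // 720x576 (PAL SD) is treated as HD and assigned BT.709.
    const bool hd = (width > 720) || (height > 576);
    const int code = hd ? 1 : 6;
    return { code, code, code };
}
```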
Any help would be greatly appreciated.
Thanks
Alex
Trying to start MYOB
Talk to Media Foundation from Linux
Win7 + H.264 Encoder + IMFSinkWriter Can't use Quality VBR encoding?
I'm trying to alter the encoder quality property eAVEncCommonRateControlMode_Quality via ICodecAPI.
However, the setting is ignored, as stated in the documentation, which says the property must be set before IMFTransform::SetOutputType is called.
Now here is the problem: the sink writer seems to call IMFTransform::SetOutputType when we call SetInputMediaType on the sink writer. However, if we don't call SetInputMediaType, we can't retrieve the ICodecAPI interface via sinkWriter.GetServiceForStream (it throws an exception) to change the quality setting. It seems like a catch-22. I'm hoping it's me and not just a design flaw in the APIs.
Setting the quality property works on Win8, as Win8 does not ignore the property when it is set after IMFTransform::SetOutputType has been called.
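The sequence that reportedly works on Win8 can be sketched as follows (a Windows-only sketch with abbreviated error handling; on Win7 it still fails for the ordering reason above):

```cpp
// After SetInputMediaType, fetch ICodecAPI for the stream and switch the
// encoder to quality-based VBR. Works on Win8+ per the description above.
#include <mfreadwrite.h>
#include <strmif.h>
#include <codecapi.h>

HRESULT SetQualityVbr(IMFSinkWriter *pWriter, DWORD stream, UINT32 quality)
{
    ICodecAPI *pCodecApi = NULL;
    HRESULT hr = pWriter->GetServiceForStream(stream, GUID_NULL,
                                              IID_PPV_ARGS(&pCodecApi));
    if (FAILED(hr))
        return hr;

    VARIANT v;
    VariantInit(&v);
    v.vt = VT_UI4;
    v.ulVal = eAVEncCommonRateControlMode_Quality;
    hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonRateControlMode, &v);

    if (SUCCEEDED(hr))
    {
        v.ulVal = quality;  // 0-100
        hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonQuality, &v);
    }
    pCodecApi->Release();
    return hr;
}
```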
Help!!
HEVC Decoder MFT Sample
I am trying to use the H.265/HEVC video decoder MFT as described here:
https://msdn.microsoft.com/en-us/library/windows/desktop/mt218785(v=vs.85).aspx
Here is the code to create the decoder:
MFStartup(MF_VERSION, MFSTARTUP_FULL);
CoCreateInstance(__uuidof(CMSH264DecoderMFT), NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&m_pDecoder));
MFCreateMediaType(&m_pInputStreamType);
m_pInputStreamType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
m_pInputStreamType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_HEVC_ES);
rc = m_pDecoder->SetInputType(0, m_pInputStreamType, 0);
It fails at the last statement with rc = 0xC00D36B4: "The data specified for the media type is invalid, inconsistent, or not supported by this object."
I am using Windows 10 Professional. Is there a sample showing how the decoding could be done, with the output going to a raw buffer?
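One thing worth noting in the snippet above: it instantiates the H.264 decoder class (CMSH264DecoderMFT) and then offers it HEVC input, which by itself would explain an invalid-media-type error. A hedged, Windows-only sketch of an alternative is to enumerate a decoder that actually accepts HEVC rather than hard-coding a CLSID:

```cpp
// Enumerate a video decoder MFT that accepts HEVC input and activate the
// first match. Windows-only sketch; error handling abbreviated.
#include <mfapi.h>
#include <mfidl.h>
#include <mftransform.h>
#include <mferror.h>

HRESULT CreateHevcDecoder(IMFTransform **ppDecoder)
{
    MFT_REGISTER_TYPE_INFO input = { MFMediaType_Video, MFVideoFormat_HEVC };
    IMFActivate **ppActivate = NULL;
    UINT32 count = 0;

    HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_DECODER,
                           MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_SORTANDFILTER,
                           &input, NULL, &ppActivate, &count);
    if (FAILED(hr))
        return hr;
    if (count == 0)
    {
        CoTaskMemFree(ppActivate);
        return MF_E_TOPO_CODEC_NOT_FOUND;
    }

    hr = ppActivate[0]->ActivateObject(IID_PPV_ARGS(ppDecoder));
    for (UINT32 i = 0; i < count; i++)
        ppActivate[i]->Release();
    CoTaskMemFree(ppActivate);
    return hr;
}
```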
Microsoft Media Foundation and patent payments for codecs usage. Who pays royalties?
Hello,
Can we legally use the h.264/aac/mp3 codecs available in the Microsoft Media Foundation framework in our commercial (paid) application for Windows 7/8.1/10? I do not understand whether Microsoft covers the royalties owed to patent-pool companies like MPEG LA (h264), Via Licensing (aac), etc. when we use the Media Foundation framework, or whether developers have to sign license agreements and pay royalties to the patent pools anyway. I can't find any information on this subject. Where can I find more official information? Thanks!
Does MediaFoundation's H.264 encoding require royalty?
I am using Media Foundation's H.264 encoder in commercial software. Does it require a royalty?
The max resolution for mp4(h264) encoder
hi guys
I want to know the max resolution for the mp4 (h264) encoder in Media Foundation. I can't use the sink writer to encode a 4K-resolution mp4 file.
Thanks.
msmpeg2vdec.pdb?
I'm looking for msmpeg2vdec.pdb, to try and debug crashes happening in msmpeg2vdec.dll.
Note: I only get crash reports from customers, and I haven't been able to reproduce them myself, that's why I'm really desperate for symbols!
Failing that, any clue about this crash? In msmpeg2vdec.dll version 12.0.9200.17037, it crashes at offset 0x1230b2 trying to read address 0x700 (a "mov r10d,dword ptr [rax+700h]" where rax = 0, i.e. a read through a null pointer).
SetInputMediaType returning MF_E_INVALIDMEDIATYPE
We have existing functionality in an application for writing uncompressed images generated from a screen capture to an mp4 file, based largely on the Sink Writer tutorial. It has worked well on every Windows 7 machine, but seems to fail consistently on Windows 8.1 and Windows 10. The failure occurs when calling SetInputMediaType on the IMFSinkWriter, which always returns MF_E_INVALIDMEDIATYPE. The result persists if the encoding format is changed (to MFVideoFormat_WMV3, for instance), if the size of the image to be written is reduced, if the frame rate is modified, etc. A snapshot of the code where the IMFSinkWriter is initialized follows:
const UINT32 VIDEO_FPS = 30;
const UINT64 VIDEO_FRAME_DURATION = 10 * 1000 * 1000 / VIDEO_FPS;
const UINT32 VIDEO_BIT_RATE = 800000;
const GUID VIDEO_ENCODING_FORMAT = MFVideoFormat_H264;
const GUID VIDEO_INPUT_FORMAT = MFVideoFormat_RGB32;
HRESULT CMP4Writer::InitializeMFWriter(WCHAR * filename, int frameWidth, int frameHeight, int frameRate)
{
    IMFMediaType *pMediaTypeOut = NULL;
    IMFMediaType *pMediaTypeIn = NULL;
    DWORD streamIndex;
    wchar_t * pBuffer = 0;
    HRESULT hr = S_OK;

    CHECK_HR(hr = MFCreateSinkWriterFromURL(filename, NULL, NULL, &m_pWriter));

    CHECK_HR(hr = MFCreateMediaType(&pMediaTypeOut));
    CHECK_HR(hr = pMediaTypeOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
    CHECK_HR(hr = pMediaTypeOut->SetGUID(MF_MT_SUBTYPE, VIDEO_ENCODING_FORMAT));
    CHECK_HR(hr = pMediaTypeOut->SetUINT32(MF_MT_AVG_BITRATE, frameHeight * frameWidth * VIDEO_FPS * 4 * 32));
    CHECK_HR(hr = pMediaTypeOut->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive));
    CHECK_HR(hr = MFSetAttributeSize(pMediaTypeOut, MF_MT_FRAME_SIZE, frameWidth, frameHeight));
    CHECK_HR(hr = MFSetAttributeRatio(pMediaTypeOut, MF_MT_FRAME_RATE, VIDEO_FPS, 1));
    CHECK_HR(hr = MFSetAttributeRatio(pMediaTypeOut, MF_MT_PIXEL_ASPECT_RATIO, 1, 1));
    CHECK_HR(hr = pMediaTypeOut->SetUINT32(MF_MT_MPEG2_PROFILE, eAVEncH264VProfile_Main));
    CHECK_HR(hr = m_pWriter->AddStream(pMediaTypeOut, &streamIndex));

    CHECK_HR(hr = MFCreateMediaType(&pMediaTypeIn));
    CHECK_HR(hr = pMediaTypeIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
    CHECK_HR(hr = pMediaTypeIn->SetGUID(MF_MT_SUBTYPE, VIDEO_INPUT_FORMAT));
    CHECK_HR(hr = pMediaTypeIn->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive));
    CHECK_HR(hr = pMediaTypeIn->SetUINT32(MF_MT_DEFAULT_STRIDE, 4 * frameWidth));

    UINT cbImage = 0;
    CHECK_HR(hr = MFCalculateImageSize(VIDEO_INPUT_FORMAT, frameWidth, frameHeight, &cbImage));
    CHECK_HR(hr = pMediaTypeIn->SetUINT32(MF_MT_SAMPLE_SIZE, cbImage));
    CHECK_HR(hr = pMediaTypeIn->SetUINT32(MF_MT_FIXED_SIZE_SAMPLES, TRUE));
    CHECK_HR(hr = pMediaTypeIn->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE));
    CHECK_HR(hr = MFSetAttributeSize(pMediaTypeIn, MF_MT_FRAME_SIZE, frameWidth, frameHeight));
    CHECK_HR(hr = MFSetAttributeRatio(pMediaTypeIn, MF_MT_FRAME_RATE, VIDEO_FPS, 1));
    CHECK_HR(hr = MFSetAttributeRatio(pMediaTypeIn, MF_MT_PIXEL_ASPECT_RATIO, 1, 1));
    CHECK_HR(hr = m_pWriter->SetInputMediaType(streamIndex, pMediaTypeIn, NULL));
    CHECK_HR(hr = m_pWriter->BeginWriting());

    SafeRelease(&pMediaTypeOut);
    SafeRelease(&pMediaTypeIn);
    return 0;
}
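One thing worth ruling out in the snippet above: MF_MT_AVG_BITRATE is a UINT32 attribute, and the expression frameHeight * frameWidth * VIDEO_FPS * 4 * 32 overflows 32-bit arithmetic at HD sizes (note also that the defined VIDEO_BIT_RATE constant is never used). Whether this is the actual cause of MF_E_INVALIDMEDIATYPE is a guess, but it is cheap to check, as this portable sketch shows:

```cpp
#include <cassert>
#include <cstdint>

// Returns true when frameHeight * frameWidth * fps * 4 * 32 (the value the
// snippet above feeds to MF_MT_AVG_BITRATE) does not fit in a UINT32 and
// would therefore wrap in 32-bit arithmetic.
bool AvgBitrateOverflows(uint32_t width, uint32_t height, uint32_t fps)
{
    const uint64_t bits = uint64_t(width) * height * fps * 4 * 32;
    return bits > UINT32_MAX;
}
```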
Frankle
SampleGrabberSink respect MF_TOPONODE_WORKQUEUE_ID
I am using the SampleGrabberSink in a real-time application where multiple sources are decoded at the same time. I am able to set MF_TOPONODE_WORKQUEUE_ID on the source node so that each source runs in its own work queue, but I'm still seeing the sample grabber call my callback from the same thread. Is there a way to force the sample grabber to create its own work queue for each topology? I have another node in the same topology that is running its own queue.
Jay
New to Movie Editing
Are there help resources teaching newbies like me how to use the basic tools in Movie Maker?
e.g. starting a project, using the trim tool, merging audio with the video, etc.
If there are, how do I get them?
Thanks for the help;
L. Low
Personalization
- Take me to font changes, text changes, and colors
Is result of IMF2DBuffer::ContiguousCopyTo always top-down?
The documentation is not really clear on whether the data written by the ContiguousCopyTo method of the IMF2DBuffer interface is always top-down, or whether it preserves/depends on the pitch/stride of the source data.
I stumbled upon the MFCreateDXSurfaceBuffer function which increases my confusion:
fBottomUpWhenLinear [in]
If TRUE, the buffer's IMF2DBuffer::ContiguousCopyTo method copies the buffer into a bottom-up format. The bottom-up format is compatible with GDI for uncompressed RGB images. If this parameter is FALSE, the ContiguousCopyTo method copies the buffer into a top-down format, which is compatible with DirectX.
So how can I find out if the pitch/stride of the contiguous result data is positive or negative?
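For what it's worth, the two layouts that documentation describes can be illustrated portably. This is not the MF implementation, just a sketch of what a top-down contiguous copy does with a scanline-0 pointer and a pitch that may be negative (as IMF2DBuffer::Lock2D can return for a bottom-up surface):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// scanline0 always points at the visually top row; pitch is negative for a
// bottom-up surface. Walking rows 0..height-1 through the pitch therefore
// always yields a top-down contiguous result; a bottom-up contiguous copy
// would walk the same rows in the opposite order.
void ContiguousCopyTopDown(const uint8_t *scanline0, int pitch,
                           int widthBytes, int height, uint8_t *dst)
{
    for (int y = 0; y < height; ++y)
        std::memcpy(dst + std::ptrdiff_t(y) * widthBytes,
                    scanline0 + std::ptrdiff_t(y) * pitch,
                    widthBytes);
}
```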
Can Windows Media Player 12 handle a playlist with time codes?
I am very new to Windows Media Player, but I have a very good knowledge of programming.
I am wondering if the latest Windows Media Player can handle a playlist composed of start and end time codes for a video clip?
I have seen a demo of a playlist created by putting various video clips into a folder and directing WMP to play it.
I would like to have a single video file and play clips as defined by a time code or time tag, from the starting point to the ending point. Can WMP handle this, and how do I use it?
I would normally like to use a PowerPoint slide show with an embedded WMP using the defined time codes to play a selected video. Each of my slides would have a different time code to show a different video clip.
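WMP's ASX playlist format has per-entry STARTTIME and DURATION elements, which may be the closest built-in fit for this. A sketch (the file path is hypothetical, and support for STARTTIME can depend on the media format being seekable):

```xml
<asx version="3.0">
  <title>Clips cut from one master file</title>
  <!-- Each entry plays the same file from a different offset. -->
  <entry>
    <ref href="C:\Videos\master.wmv" />
    <starttime value="00:00:07.0" />
    <duration value="00:00:02.0" />
  </entry>
  <entry>
    <ref href="C:\Videos\master.wmv" />
    <starttime value="00:01:30.0" />
    <duration value="00:00:05.0" />
  </entry>
</asx>
```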
I would appreciate any suggestions on this matter !!
Thank You,
Using Sequencer Source with editing sequences
I am trying to write an application that seamlessly plays back different sequences of edits from a single source file that contains all the possible cuts. I have managed to create a deque<Cut> object, where the structure Cut is simply:
struct Cut
{
    MFTIME start;
    MFTIME stop;
};
This reports the start and stop time of any given cut in MFTIME format from a timecode string like 00:07:12:13.
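That conversion can be sketched portably. This is a hypothetical converter matching the description above (non-drop-frame "hh:mm:ss:ff" at an integer frame rate); MFTIME itself is a LONGLONG in 100-nanosecond units from the Windows headers, and int64_t stands in for it here:

```cpp
#include <cassert>
#include <cstdint>

// Convert a non-drop-frame hh:mm:ss:ff timecode to 100-ns units (MFTIME).
int64_t TimecodeToMFTime(int hh, int mm, int ss, int ff, int fps)
{
    const int64_t kTicksPerSecond = 10000000;  // 100-ns ticks per second
    const int64_t seconds = int64_t(hh) * 3600 + int64_t(mm) * 60 + ss;
    return seconds * kTicksPerSecond + int64_t(ff) * kTicksPerSecond / fps;
}
```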
I have also initialised the media session to expect editing sequences thus:
IMFAttributes *pConfig = NULL;
if (SUCCEEDED(hr))
{
    hr = MFCreateAttributes(&pConfig, 1);
}
// Set MF_SESSION_GLOBAL_TIME to TRUE so the media session can expect editing sequences.
if (SUCCEEDED(hr))
{
    pConfig->SetUINT32(MF_SESSION_GLOBAL_TIME, TRUE);
}
// Create the media session.
if (SUCCEEDED(hr))
{
    hr = MFCreateMediaSession(pConfig, &m_pMediaSession);
    LOG_IF_FAILED(L"MFCreateMediaSession", hr);
}
When creating the topologies, I add the MF_TOPOLOGY_PROJECTSTART and _PROJECTSTOP values in the CreateTopology method which is called in the AddEditSegment function:
HRESULT CPlayer::AddEditSegment(const Cut& Edit, MFSequencerElementId *pSegmentId)
{
    TRACE((L"CPlayer::AddEditSegment"));
    if (!pSegmentId)
    {
        return E_POINTER;
    }
    HRESULT hr = S_OK;
    IMFTopology *pTopology = NULL;
    if (SUCCEEDED(hr))
    {
        hr = MFCreateTopology(&pTopology);
    }
    if (SUCCEEDED(hr))
    {
        hr = this->CreateTopology(Edit, pTopology);
    }
    if (SUCCEEDED(hr))
    {
        hr = this->AddTopologyToSequencer(pTopology, pSegmentId);
    }
    return hr;
}
///////////////////////////////////////////////////////////////////
HRESULT CPlayer::CreateTopology(const Cut& Edit, IMFTopology *pTopology)
{
    if (!m_pMediaSource || !pTopology)
    {
        return E_POINTER;
    }
    TRACE((L"CPlayer::CreateTopology"));
    IMFPresentationDescriptor *pPresentationDescriptor = NULL;
    DWORD cSourceStreams = 0;
    HRESULT hr = S_OK;
    // Create a presentation descriptor for the media source.
    if (SUCCEEDED(hr))
    {
        hr = m_pMediaSource->CreatePresentationDescriptor(&pPresentationDescriptor);
        LOG_IF_FAILED(L"IMFMediaSource::CreatePresentationDescriptor", hr);
    }
    if (SUCCEEDED(hr))
    {
        hr = pPresentationDescriptor->GetStreamDescriptorCount(&cSourceStreams);
        LOG_IF_FAILED(L"IMFPresentationDescriptor::GetStreamDescriptorCount", hr);
    }
    TRACE((L"Stream count: %d", cSourceStreams));
    if (SUCCEEDED(hr))
    {
        for (DWORD i = 0; i < cSourceStreams; i++)
        {
            hr = CreateNodesForStream(Edit, pPresentationDescriptor, pTopology, i);
            if (FAILED(hr))
            {
                break;
            }
        }
    }
    if (SUCCEEDED(hr))
    {
        hr = pTopology->SetUINT64(MF_TOPOLOGY_PROJECTSTART, Edit.start);
    }
    if (SUCCEEDED(hr))
    {
        hr = pTopology->SetUINT64(MF_TOPOLOGY_PROJECTSTOP, Edit.stop);
    }
    SAFE_RELEASE(pPresentationDescriptor);
    return hr;
}
The net effect of this, if for example I have a Cut object that begins at 7 seconds and lasts for 2 seconds, is that the master file plays its first 2 seconds (i.e. 00:00:00:00 to 00:00:02:00 instead of 00:00:07:00 to 00:00:09:00), after 7 seconds of black have elapsed. I had hoped that the pipeline would seek automatically to 7 seconds and start playing from there. Am I missing something here?
Also, if I try to add more than one Cut/topology to the sequencer source, I get absolutely nothing.
Any pointers, anyone?
Overlay GPS and Date and Time
I am working on a project to capture live streaming from a web camera at 1080p resolution. The sample project creates a video file, and I need to overlay the date and time, along with my GPS data, in the video file. Can you please help us with how to do that?
VK
TopoEdit - Tee nodes are broken?
I just installed the latest Windows 10 SDK so I could use TopoEdit to start some Media Foundation work. When I add a Tee node, it has only an input pin and zero outputs. After connecting the Tee's input pin, there are still no output pins. That's not expected, is it?
Thank you,
Josh