Media Foundation Development for Windows Desktop forum

IMFSourceReader::ReadSample or MFCaptureToFile memory leak?


I'm concerned that I may have a memory leak in the MFCaptureToFile sample, specifically related to IMFSourceReader::ReadSample.  

  • I believe this because on the Task Manager "Processes" tab I see my program's memory usage growing indefinitely, even as far as 25 GB, which brings my computer to its knees, presumably from all the paged-memory thrashing.
  • Nevertheless, when my program exits, all this memory is eventually freed (even in the 25 GB case, after a couple of minutes), so the memory isn't permanently lost.

I started with the MFCaptureToFile sample, so I went back to that sample to see whether it has the same memory leak. I don't have a working capture device at the moment, so I am temporarily re-routing the input to a file. I re-created the trivial changes to make this happen, and MFCaptureToFile so modified indeed shows the same memory growth.

Is it my modifications somehow, or does MFCaptureToFile actually have a memory leak?

Below are my trivial modifications. With these applied fresh to the MFCaptureToFile project, I see the growing memory usage in Task Manager, so the leak is already present there.

Any comments please?

Trivial mods:

1) I added CreateMediaSource(), which I took from another sample, and called it from MFCaptureToFile's StartCapture() function for the case where pActivate is NULL (no capture device). Below is the excerpt from capture.cpp. Of course, I also declare CreateMediaSource in capture.h.

//-------------------------------------------------------------------
// CreateMediaSource
//
// Create a media source from a URL.
//-------------------------------------------------------------------

HRESULT CCapture::CreateMediaSource(PCWSTR sURL, IMFMediaSource **ppSource)
{
    MF_OBJECT_TYPE ObjectType = MF_OBJECT_INVALID;

    IMFSourceResolver* pSourceResolver = NULL;
    IUnknown* pSource = NULL;

    // Create the source resolver.
    HRESULT hr = MFCreateSourceResolver(&pSourceResolver);
    if (FAILED(hr))
    {
        goto done;
    }

    // Use the source resolver to create the media source.

    // Note: For simplicity this sample uses the synchronous method to create
    // the media source. However, creating a media source can take a noticeable
    // amount of time, especially for a network source. For a more responsive
    // UI, use the asynchronous BeginCreateObjectFromURL method.

    hr = pSourceResolver->CreateObjectFromURL(
        sURL,                       // URL of the source.
        MF_RESOLUTION_MEDIASOURCE,  // Create a source object.
        NULL,                       // Optional property store.
        &ObjectType,                // Receives the created object type.
        &pSource                    // Receives a pointer to the media source.
        );
    if (FAILED(hr))
    {
        goto done;
    }

    // Get the IMFMediaSource interface from the media source.
    hr = pSource->QueryInterface(IID_PPV_ARGS(ppSource));

done:
    SafeRelease(&pSourceResolver);
    SafeRelease(&pSource);
    return hr;
}

//-------------------------------------------------------------------
// StartCapture
//
// Start capturing.
//-------------------------------------------------------------------

HRESULT CCapture::StartCapture(
    IMFActivate *pActivate,
    const WCHAR *pwszFileName,
    const EncodingParameters& param
    )
{
    HRESULT hr = S_OK;

    IMFMediaSource *pSource = NULL;

    EnterCriticalSection(&m_critsec);

    // Create the media source for the device.
	if (pActivate)
	{
		// If video capture device exists
		hr = pActivate->ActivateObject(
			__uuidof(IMFMediaSource),
			(void**)&pSource
			);

		// Get the symbolic link. This is needed to handle device-
		// loss notifications. (See CheckDeviceLost.)

		if (SUCCEEDED(hr))
		{
			hr = pActivate->GetAllocatedString(
				MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK,
				&m_pwszSymbolicLink,
				NULL
				);
		}
	}
	else
	{
		// Otherwise HARD-CODE read from file
		//m_bFileSource = TRUE;
		//m_bEOF = FALSE;

		hr = CreateMediaSource(L"M:\\SPIIRCAM\\Doc\\Video\\Media Foundation SDK\\Play\\PLAYDATA\\MVI_2620.wmv", &pSource);

		// Since we're reading from a file and not a device, don't need to handle device-loss notifications
		//unnecessary//m_pwszSymbolicLink = NULL;
	}

    if (SUCCEEDED(hr))
    {
        hr = OpenMediaSource(pSource);
    }

    // Create the sink writer
    if (SUCCEEDED(hr))
    {
        hr = MFCreateSinkWriterFromURL(
            pwszFileName,
            NULL,
            NULL,
            &m_pWriter
            );
    }

    // Set up the encoding parameters.
    if (SUCCEEDED(hr))
    {
        hr = ConfigureCapture(param);
    }

    if (SUCCEEDED(hr))
    {
        m_bFirstSample = TRUE;
        m_llBaseTime = 0;

        // Request the first video frame.

        hr = m_pReader->ReadSample(
            (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
            0,
            NULL,
            NULL,
            NULL,
            NULL
            );
    }

    SafeRelease(&pSource);
    LeaveCriticalSection(&m_critsec);
    return hr;
}

2) I also had to enable the "Start Capture" button for this case with no capture devices. Below is an excerpt from winmain.cpp, where my ONLY change was the one line where I pass TRUE to the EnableDialogControl() call.

//-----------------------------------------------------------------------------
// UpdateUI
//
// Updates the dialog UI for the current state.
//-----------------------------------------------------------------------------

void UpdateUI(HWND hDlg)
{
    BOOL bEnable = (g_devices.Count() > 0);     // Are there any capture devices?
    BOOL bCapturing = (g_pCapture != NULL);     // Is video capture in progress now?

    HWND hButton = GetDlgItem(hDlg, IDC_CAPTURE);

    if (bCapturing)
    {
        SetWindowText(hButton, L"Stop Capture");
    }
    else
    {
        SetWindowText(hButton, L"Start Capture");
    }

    EnableDialogControl(hDlg, IDC_CAPTURE, TRUE/*bCapturing || bEnable*/);

    EnableDialogControl(hDlg, IDC_DEVICE_LIST, !bCapturing && bEnable);

    // The following cannot be changed while capture is in progress,
    // but are OK to change when there are no capture devices.

    EnableDialogControl(hDlg, IDC_CAPTURE_MP4, !bCapturing);
    EnableDialogControl(hDlg, IDC_CAPTURE_WMV, !bCapturing);
    EnableDialogControl(hDlg, IDC_OUTPUT_FILE, !bCapturing);
}



How to get frame rate (fps) using MF API


Hi,

Are there any functions in the Media Foundation API, similar to the IQualProp interface methods, for getting the playback frame rate?

I have looked at the MF SDK player samples and the Media Foundation interface functions and could not find this information.

Thanks,

Jirong
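
For what it's worth, below is a minimal sketch (my own, not from the SDK) of reading the nominal frame rate declared by the current media type, assuming playback goes through an IMFSourceReader. Note this is the rate declared by the media type, not the achieved rendering rate that IQualProp::get_AvgFrameRate reports.

#include <mfapi.h>
#include <mfreadwrite.h>

// Reads MF_MT_FRAME_RATE (a numerator/denominator pair) from the current
// video media type and converts it to frames per second.
HRESULT GetNominalFrameRate(IMFSourceReader *pReader, double *pFps)
{
    IMFMediaType *pType = NULL;
    HRESULT hr = pReader->GetCurrentMediaType(
        (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, &pType);
    if (SUCCEEDED(hr))
    {
        UINT32 num = 0, den = 0;
        hr = MFGetAttributeRatio(pType, MF_MT_FRAME_RATE, &num, &den);
        if (SUCCEEDED(hr) && den != 0)
        {
            *pFps = (double)num / den;
        }
        pType->Release();
    }
    return hr;
}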

How to use Media Foundation to capture a webcam shot (still) to image file?

Right now I changed the MFCaptureToFile sample to get a short (1-2 frame) wmv clip and then use a separate video decompiler program to extract the first frame to an image (jpg) file.
Can it be done more directly within MF? Phrasing it differently: can a (jpg) image file be set as a SINK (IMFMediaSink) for MF?
Thank you very much
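
In the meantime, here is a rough sketch of a more direct route without an MF image sink: have an IMFSourceReader deliver one RGB32 frame and encode it with WIC. This assumes the reader was created over the capture device with MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING set (so the RGB32 conversion is available), that width/height come from MF_MT_FRAME_SIZE, and that SafeRelease is the helper from the SDK samples.

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <wincodec.h>

HRESULT SaveFrameAsJpeg(IMFSourceReader *pReader, PCWSTR szFile,
                        UINT width, UINT height)
{
    IMFMediaType *pType = NULL;
    IMFSample *pSample = NULL;
    IMFMediaBuffer *pBuffer = NULL;
    IWICImagingFactory *pFactory = NULL;
    IWICStream *pStream = NULL;
    IWICBitmapEncoder *pEncoder = NULL;
    IWICBitmapFrameEncode *pFrame = NULL;
    BYTE *pPixels = NULL;
    DWORD cbPixels = 0;

    // Ask the reader to deliver RGB32 so WIC can consume the pixels directly.
    HRESULT hr = MFCreateMediaType(&pType);
    if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    if (SUCCEEDED(hr)) hr = pReader->SetCurrentMediaType(
        (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, NULL, pType);

    // Pull synchronously until a video frame arrives.
    DWORD stream = 0, flags = 0;
    LONGLONG ts = 0;
    while (SUCCEEDED(hr) && pSample == NULL)
    {
        hr = pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                 0, &stream, &flags, &ts, &pSample);
        if (SUCCEEDED(hr) && (flags & MF_SOURCE_READERF_ENDOFSTREAM))
            hr = E_FAIL;
    }

    if (SUCCEEDED(hr)) hr = pSample->ConvertToContiguousBuffer(&pBuffer);
    if (SUCCEEDED(hr)) hr = pBuffer->Lock(&pPixels, NULL, &cbPixels);

    // Encode the locked pixels as a single JPEG frame. Caveat: RGB32 frames
    // can be bottom-up; real code should honor the stride and orientation.
    if (SUCCEEDED(hr)) hr = CoCreateInstance(CLSID_WICImagingFactory, NULL,
        CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pFactory));
    if (SUCCEEDED(hr)) hr = pFactory->CreateStream(&pStream);
    if (SUCCEEDED(hr)) hr = pStream->InitializeFromFilename(szFile, GENERIC_WRITE);
    if (SUCCEEDED(hr)) hr = pFactory->CreateEncoder(GUID_ContainerFormatJpeg, NULL, &pEncoder);
    if (SUCCEEDED(hr)) hr = pEncoder->Initialize(pStream, WICBitmapEncoderNoCache);
    if (SUCCEEDED(hr)) hr = pEncoder->CreateNewFrame(&pFrame, NULL);
    if (SUCCEEDED(hr)) hr = pFrame->Initialize(NULL);
    if (SUCCEEDED(hr)) hr = pFrame->SetSize(width, height);
    WICPixelFormatGUID format = GUID_WICPixelFormat32bppBGR;
    if (SUCCEEDED(hr)) hr = pFrame->SetPixelFormat(&format);
    if (SUCCEEDED(hr)) hr = pFrame->WritePixels(height, width * 4, cbPixels, pPixels);
    if (SUCCEEDED(hr)) hr = pFrame->Commit();
    if (SUCCEEDED(hr)) hr = pEncoder->Commit();

    if (pPixels) pBuffer->Unlock();
    SafeRelease(&pFrame); SafeRelease(&pEncoder); SafeRelease(&pStream);
    SafeRelease(&pFactory); SafeRelease(&pBuffer); SafeRelease(&pSample);
    SafeRelease(&pType);
    return hr;
}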

Need more information on video formats (memory representation) and IMF2DBuffer


In my application, I capture video using the Media Foundation APIs and convert it to a common format. I need some further clarification regarding the format (memory representation) of a captured video frame exposed through the IMFMediaBuffer and IMF2DBuffer interface. As per the documentation:

"Every video format defines a contiguous or packed representation.
 This representation is compatible with the standard layout of a DirectX surface in system memory, with no additional padding. For RGB video, the contiguous representation has a pitch equal to the image width in bytes, rounded up to the nearest DWORD boundary.
 For YUV video, the layout of the contiguous representation depends on the YUV format. For planar YUV formats, the Y plane might have a different pitch than the U and V planes."

Is there any further discussion or documentation on what contiguity means for the various YUV formats? I'm still relatively new to video processing, so I found articles like this one on YUV render formats very helpful.

It seems like I need to support 3 cases for each format, 2 of which are the same: IMFMediaBuffer (when the sample does not expose the 2D buffer interface), IMF2DBuffer with a contiguous native format, and IMF2DBuffer with a non-contiguous native format. If I read correctly, an IMFMediaBuffer and an IMF2DBuffer that is contiguous are basically the same thing.

So, for either the contiguous or non-contiguous representation, I need to know the orientation and stride of the image in order to process it.

Right now I am trying to support capture in YUY2, RGB24, and I420. 
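
To make the two access paths concrete, here is the rough sketch I am working from (the per-row processing is elided; MFGetStrideForBitmapInfoHeader supplies the default stride for the contiguous layout when only IMFMediaBuffer is available). For planar formats such as I420, the chroma planes need the same treatment with their own, typically halved, pitch.

#include <mfapi.h>
#include <mfidl.h>

HRESULT ProcessFrame(IMFMediaBuffer *pBuffer, const GUID& subtype,
                     UINT32 width, UINT32 height)
{
    IMF2DBuffer *p2D = NULL;
    HRESULT hr = pBuffer->QueryInterface(IID_PPV_ARGS(&p2D));
    if (SUCCEEDED(hr))
    {
        // Native layout: Lock2D reports the actual pitch, which may be
        // negative for bottom-up images; pScanline0 is always the top row.
        BYTE *pScanline0 = NULL;
        LONG  lPitch = 0;
        hr = p2D->Lock2D(&pScanline0, &lPitch);
        if (SUCCEEDED(hr))
        {
            for (UINT32 y = 0; y < height; y++)
            {
                BYTE *pRow = pScanline0 + (LONG)y * lPitch;
                // ... process one row of the Y (or packed) plane ...
            }
            p2D->Unlock2D();
        }
        p2D->Release();
    }
    else
    {
        // No IMF2DBuffer: Lock() returns the contiguous representation,
        // whose pitch is the format's default stride.
        LONG lStride = 0;
        hr = MFGetStrideForBitmapInfoHeader(subtype.Data1, width, &lStride);

        BYTE *pData = NULL;
        DWORD cb = 0;
        if (SUCCEEDED(hr)) hr = pBuffer->Lock(&pData, NULL, &cb);
        if (SUCCEEDED(hr))
        {
            // A negative default stride (possible for RGB) means the image
            // is stored bottom-up; the top row is then last in memory.
            BYTE *pScanline0 = (lStride < 0)
                             ? pData + (height - 1) * (UINT32)(-lStride)
                             : pData;
            for (UINT32 y = 0; y < height; y++)
            {
                BYTE *pRow = pScanline0 + (LONG)y * lStride;
                // ... process one row ...
            }
            pBuffer->Unlock();
        }
    }
    return hr;
}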




How to capture raw format image using media foundation?


Hello,

I am new to Media Foundation's video capture API, but I have an app that performs a video capture preview of a webcam. I picked up most of the ideas from this sample:

CaptureEngine video capture sample

I am facing a problem saving the video buffer in raw format from the video stream. I am trying the following approach.

Using IMFCapturePhotoSink, I captured one frame from the video stream. There is no raw container support in the capture engine, so I registered a callback using IMFCapturePhotoSink->SetSampleCallback() to retrieve the buffer from the video source.

After registering the callback, the IMFCaptureEngineOnSampleCallback::OnSample() method receives a sample. OnSample is not called when streaming MJPG video, but I do receive a sample when streaming in YUY2 format. Below is the code for your reference.

HRESULT CTakePhoto()
{
    IMFCaptureSink *pSink = NULL;
    IMFCapturePhotoSink *pPhoto = NULL;
    IMFCaptureSource *pSource = NULL;   // initialize so SafeRelease is safe if an early goto is taken
    IMFMediaType *pMediaType = 0;

    // Get a pointer to the photo sink.
    HRESULT hr = m_pEngine->GetSink(MF_CAPTURE_ENGINE_SINK_TYPE_PHOTO, &pSink);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = pSink->QueryInterface(IID_PPV_ARGS(&pPhoto));
    if (FAILED(hr))
    {
        goto done;
    }

    hr = m_pEngine->GetSource(&pSource);
    if (FAILED(hr))
    {
		PrintDebug(L"GetSource : GetLastError() = %d\r\n",GetLastError());
        goto done;
    }

    hr = pSource->GetCurrentDeviceMediaType(1, &pMediaType);   // 1 is the image stream index
    if (FAILED(hr))
    {
        goto done;
    }

    hr = pPhoto->RemoveAllStreams();
    if (FAILED(hr))
    {
		PrintDebug(L"RemoveAllStreams failed 0x%x %d\r\n",hr,GetLastError());
        goto done;
    }

    DWORD dwSinkStreamIndex;
	// Try to connect the first still image stream to the photo sink
	hr = pPhoto->AddStream((DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_PHOTO,  pMediaType,NULL, &dwSinkStreamIndex);
	if(FAILED(hr))
	{
		goto done;
	}

	//Register the callback
	hr = pPhoto->SetSampleCallback(&SampleCb);
	if (FAILED(hr))
	{
		goto done;
	}

    hr = m_pEngine->TakePhoto();
    if (FAILED(hr))
    {
        goto done;
    }

done:
    SafeRelease(&pSink);
    SafeRelease(&pPhoto);
    SafeRelease(&pSource);
    SafeRelease(&pMediaType);
    return hr;
}

// This is a method of the callback class
STDMETHODIMP CSampleCallback::OnSample(_In_ IMFSample *pSample)
{
	if (pSample != NULL)
	{
		IMFMediaBuffer *pMediaBuffer = NULL;
		BYTE *pBuffer = NULL;
		DWORD dwBufLen = 0;

		HRESULT hr = pSample->ConvertToContiguousBuffer(&pMediaBuffer);
		if (FAILED(hr))
		{
			OutputDebugString(L"ConvertToContiguousBuffer failed\r\n");
			return S_FALSE;
		}

		hr = pMediaBuffer->Lock(&pBuffer, 0, &dwBufLen);
		if (FAILED(hr))
		{
			OutputDebugString(L"Lock failed\r\n");
			SafeRelease(&pMediaBuffer);
			return S_FALSE;
		}

		// ... write pBuffer (dwBufLen bytes of raw frame data) to a file here ...

		hr = pMediaBuffer->Unlock();
		if (FAILED(hr))
		{
			OutputDebugString(L"Unlock failed\r\n");
			SafeRelease(&pMediaBuffer);
			return S_FALSE;
		}

		SafeRelease(&pMediaBuffer);
	}
	return S_OK;
}

Why can't I receive a sample when streaming MJPG video? Why does this behaviour occur?
Did I miss some configuration needed to receive a sample? Can you help me sort out this problem?

Thanks in advance.

Regards,

Ambika


MFVideoFormat_v210 not supported?


I have a camera which outputs in this format, but the sink writer fails on the SetInputMediaType call with an invalid media type error.

Is this format not supported? Please advise.

Hardware accelerated SinkWriter Example


Does anyone know of, or have a sample of their own they will share, that illustrates how to use MF to create and use a SinkWriter to hardware-encode NV12 input to H.264?

I can get my code working in software, but I need to utilize hardware encoding resources to lower my CPU usage. I am working on a Dell Latitude E6450 with both an AMD Radeon HD 8790M and an Intel Core i7-4800MQ CPU. Both are supposed to support hardware encoding and have MFTs.

I would greatly appreciate tips/advice/code samples to correct this problem.
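
For reference, this is the sketch I am working from to request hardware MFTs from the sink writer. It assumes Windows 8 or later for MFCreateDXGIDeviceManager; on Windows 7, a D3D9 device manager created with DXVA2CreateDirect3DDeviceManager9 can be passed instead. The helper name is my own.

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <d3d11.h>

// Sketch: create a sink writer that is allowed to pick hardware encoder MFTs.
// MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS alone already lets the sink writer
// enumerate hardware encoders; the DXGI device manager additionally lets it
// keep samples in GPU memory.
HRESULT CreateHardwareSinkWriter(PCWSTR pwszOutputFile, IMFSinkWriter **ppWriter)
{
    ID3D11Device *pDevice = NULL;
    IMFDXGIDeviceManager *pManager = NULL;
    IMFAttributes *pAttr = NULL;
    UINT resetToken = 0;

    HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL,
        D3D11_CREATE_DEVICE_VIDEO_SUPPORT, NULL, 0, D3D11_SDK_VERSION,
        &pDevice, NULL, NULL);
    if (SUCCEEDED(hr)) hr = MFCreateDXGIDeviceManager(&resetToken, &pManager);
    if (SUCCEEDED(hr)) hr = pManager->ResetDevice(pDevice, resetToken);
    if (SUCCEEDED(hr)) hr = MFCreateAttributes(&pAttr, 2);
    if (SUCCEEDED(hr)) hr = pAttr->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
    if (SUCCEEDED(hr)) hr = pAttr->SetUnknown(MF_SINK_WRITER_D3D_MANAGER, pManager);
    if (SUCCEEDED(hr)) hr = MFCreateSinkWriterFromURL(pwszOutputFile, NULL, pAttr, ppWriter);

    if (pAttr)    pAttr->Release();
    if (pManager) pManager->Release();
    if (pDevice)  pDevice->Release();
    return hr;
}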

 mftrace shows the following:

For the amd gpu:

11308,EB0 15:22:09.34586 COle32ExportDetours::CoCreateInstance @ Failed to create {ADC9BC80-0F41-46C6-AB75-D693D793597D} AMD H.264 Hardware MFT Encoder (C:\Program Files\Common Files\ATI Technologies\Multimedia\AMDh264Enc32.dll) hr=0x80004005 E_FAIL
11308,EB0 15:22:09.34587 COle32ExportDetours::CoCreateInstance @ call to 'hr' failed (hr=0x80004005) at avcore\mf\samples\mf360\mftrace\mfdetours\otherdetours\ole32exportdetours.cpp:116
11308,EB0 15:22:09.34587 COle32ExportDetours::CoCreateInstance @ - exit (failed hr=0x80004005 E_FAIL)
11308,EB0 15:22:09.34587 CMFActivateDetours::ActivateObject @0078D1C0 call to 'FindDetouredVtbl( This->lpVtbl )->ActivateObject( This, riid, ppv )' failed (hr=0x80004005) at avcore\mf\samples\mf360\mftrace\mfdetours\interfacedetours\mfactivatedetours.cpp:440
11308,EB0 15:22:09.34587 CMFActivateDetours::ActivateObject @0078D1C0 - exit (failed hr=0x80004005 E_FAIL)

For the Intel cpu/gpu:

6332,CA4 15:24:04.14466 COle32ExportDetours::CoCreateInstance @ Created {4BE8D3C0-0515-4A37-AD55-E4BAE19AF471} Intel® Quick Sync Video H.264 Encoder MFT (C:\Program Files\Intel\Media SDK\mfx_mft_h264ve_w7_32.dll) @07AD3A00 - traced interfaces: IMFTransform @07AD3A00,
6332,CA4 15:24:04.14466 COle32ExportDetours::CoCreateInstance @ - exit
6332,CA4 15:24:04.14466 CMFActivateDetours::GetGUID @0040D218 - enter
6332,CA4 15:24:04.14466 CMFActivateDetours::GetGUID @0040D218 - exit
6332,CA4 15:24:04.14466 CMFActivateDetours::GetUnknown @0040D218 - enter
6332,CA4 15:24:04.14466 CMFActivateDetours::GetUnknown @0040D218 attribute not found guidKey = MFT_FIELDOFUSE_UNLOCK_Attribute
6332,CA4 15:24:04.14611 CMFActivateDetours::GetUnknown @0040D218 - exit (failed hr=0xC00D36E6 MF_E_ATTRIBUTENOTFOUND)
6332,CA4 15:24:04.14611 CMFActivateDetours::GetGUID @0040D218 - enter
6332,CA4 15:24:04.14611 CMFActivateDetours::GetGUID @0040D218 - exit
6332,CA4 15:24:04.14611 CMFActivateDetours::GetUnknown @0040D218 - enter
6332,CA4 15:24:04.14611 CMFActivateDetours::GetUnknown @0040D218 attribute not found guidKey = MFT_PREFERRED_ENCODER_PROFILE
6332,CA4 15:24:04.14611 CMFActivateDetours::GetUnknown @0040D218 - exit (failed hr=0xC00D36E6 MF_E_ATTRIBUTENOTFOUND)
6332,CA4 15:24:04.14611 CMFActivateDetours::GetUnknown @0040D218 - enter
6332,CA4 15:24:04.14612 CMFActivateDetours::GetUnknown @0040D218 attribute not found guidKey = MFT_PREFERRED_OUTPUTTYPE_Attribute
6332,CA4 15:24:04.14612 CMFActivateDetours::GetUnknown @0040D218 - exit (failed hr=0xC00D36E6 MF_E_ATTRIBUTENOTFOUND)
6332,CA4 15:24:04.14612 CMFActivateDetours::GetUINT32 @0040D218 - enter
6332,CA4 15:24:04.14612 CMFActivateDetours::GetUINT32 @0040D218 - exit
6332,CA4 15:24:04.14612 CMFPlatExportDetours::MFGetMFTMerit @ - enter
6332,CA4 15:24:04.19958 CMFPlatExportDetours::MFGetMFTMerit @ Merit validation failed for MFT @07AD3A00 (hr=80004005 E_FAIL)
6332,CA4 15:24:04.19958 CMFPlatExportDetours::MFGetMFTMerit @ - exit (failed hr=0x80004005 E_FAIL)
6332,CA4 15:24:04.20529 CMFActivateDetours::ActivateObject @0040D218 call to 'FindDetouredVtbl( This->lpVtbl )->ActivateObject( This, riid, ppv )' failed (hr=0x80004005) at avcore\mf\samples\mf360\mftrace\mfdetours\interfacedetours\mfactivatedetours.cpp:440
6332,CA4 15:24:04.20529 CMFActivateDetours::ActivateObject @0040D218 - exit (failed hr=0x80004005 E_FAIL)

Video Capture on recent Windows 8.1 Tablets shows very dark video


Hello,

I switched from DirectShow to Media Foundation to capture video from webcams. It is a desktop application and works well with both DirectShow and Media Foundation on Windows 7 and Windows 8.1 desktop computers for a lot of different webcams.

Trying the same application on a Windows 8.1 Atom-based tablet, the video is very dark and green.

I tried it on the following tablets (all of them show the above described behavior):

-Acer T100A (camera sensor MT9M114, Atom 3740)

-Dell Venue Pro 11 (camera sensor OV2722 front, IMX175 back - Atom 3770)

-HP Omni 10 5600 (camera sensor OV2722, IMX175 - Atom 3770)

I capture using IMFMediaSession, building a simple topology with a media source and the EVR.

  • TopoEdit shows the same strange behavior
  • MFTrace does not show any errors (at least I do not see any errors)
  • In case an external usb camera is used on all these tablets, the video is fine.
  • The SDK sample MFCaptureD3D works fine; it uses the source reader for capturing. I verified the media type of the source used there, and it is the same one I use in my application (same stream descriptor, same media type, verified with mftrace).
  • The "CaptureEngine" video capture sample from the SDK also works as expected; however, I need Windows 7 compatibility and would like to use the same source on both platforms.
  • When using DirectShow, all the above-mentioned tablets show only a fraction of the sensor image when capturing at lower resolutions (e.g. 640x360), but the colors of the video are fine. I tried it with the Skype desktop app and with GraphEdit, same behavior (only a fraction of the video is shown, colors are fine). Skype for desktop apparently uses a DirectShow source filter.

Has anyone tried capturing from the camera of an Atom Z3700-series tablet with Media Foundation using the media session? If so, is special handling of the media source required on these tablets?

If required, I will post some code or mftrace logs.

Thanks a lot,

Karl


Bluemlinger






Media Foundation: How can I limit the size of the .dat file in the Temporary Internet Files folder created during streaming of channels?


Problem: In my Windows application, when I stream from TV to the application, it starts buffering data in the Temporary Internet Files folder in the form of a .dat file. The size of this file keeps increasing while streaming is running; in an hour it reaches approximately 1.0 GB.

I am using Media Foundation and use a URL for channel streaming.

Question: How can I set a limit on the .dat file, or how can I delete the file after a fixed size limit, so that I can save space on my C drive? I want to delete this file during streaming or reuse the .dat file after a fixed size.

Or is this buffering mechanism required for streaming?
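
One thing that might be worth trying is disabling the network source's disk cache through its configuration property store. Below is a sketch, assuming (as I read the documentation) that MFNETSOURCE_CACHEENABLED takes a boolean VT_I4 value; the store is passed as the property-store argument of IMFSourceResolver::CreateObjectFromURL, and the helper name is my own.

#include <mfidl.h>
#include <propsys.h>
#include <propvarutil.h>

// Resolve a media source with network caching turned off (sketch).
HRESULT CreateSourceWithoutCache(IMFSourceResolver *pResolver, PCWSTR szURL,
                                 IUnknown **ppSource)
{
    IPropertyStore *pProps = NULL;
    HRESULT hr = PSCreateMemoryPropertyStore(IID_PPV_ARGS(&pProps));
    if (SUCCEEDED(hr))
    {
        PROPERTYKEY key;
        key.fmtid = MFNETSOURCE_CACHEENABLED;   // configuration property
        key.pid = 0;

        PROPVARIANT var;
        InitPropVariantFromInt32(FALSE, &var);  // ask the source not to cache
        hr = pProps->SetValue(key, var);
        PropVariantClear(&var);
    }

    MF_OBJECT_TYPE type = MF_OBJECT_INVALID;
    if (SUCCEEDED(hr))
    {
        hr = pResolver->CreateObjectFromURL(szURL, MF_RESOLUTION_MEDIASOURCE,
                                            pProps, &type, ppSource);
    }
    if (pProps) pProps->Release();
    return hr;
}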

Problem with network source property


When I get a network source property following the Windows SDK sample code, I found an issue with the code:

 

PROPERTYKEY key;
  key.fmtid = MFNETSOURCE_STATISTICS;
  key.pid = MFNETSOURCE_PROTOCOL_ID;

 

If I set the PROPERTYKEY variable as in the code above, I can get a value from IPropertyStore::GetValue().

  

PROPERTYKEY key;
  key.fmtid = MFNETSOURCE_PROTOCOL;
  key.pid = 0;

 

But if I set the PROPERTYKEY variable as in this second snippet, I cannot get the value from IPropertyStore::GetValue().

 

The two IPropertyStore pointers are the same, queried from the IMFMediaSource pointer, and both snippets are taken from the Windows SDK. Why does one work while the other does not?

Has anyone met this problem, and can anyone do me a favor?

thanks a lot!
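
For completeness, here is the statistics pattern from the first snippet as a self-contained sketch: MFGetService with MFNETSOURCE_STATISTICS_SERVICE returns the statistics property store, whose keys use MFNETSOURCE_STATISTICS as the fmtid and a statistic ID as the pid. The helper name is my own.

#include <mfidl.h>

// Query the protocol statistic from a network media source (sketch).
HRESULT GetProtocolStatistic(IMFMediaSource *pSource, LONG *plProtocol)
{
    IPropertyStore *pProps = NULL;
    HRESULT hr = MFGetService(pSource, MFNETSOURCE_STATISTICS_SERVICE,
                              IID_PPV_ARGS(&pProps));
    if (SUCCEEDED(hr))
    {
        PROPERTYKEY key;
        key.fmtid = MFNETSOURCE_STATISTICS;
        key.pid = MFNETSOURCE_PROTOCOL_ID;

        PROPVARIANT var;
        PropVariantInit(&var);
        hr = pProps->GetValue(key, &var);
        if (SUCCEEDED(hr))
        {
            *plProtocol = var.lVal;   // a MFNETSOURCE_PROTOCOL_TYPE value
        }
        PropVariantClear(&var);
        pProps->Release();
    }
    return hr;
}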

Can't change Properties of axwindowsmediaplayer on second form


I have a C# Forms Application which uses an instance of axwindowsmediaplayer, created at design time, which works fine.

I'm now trying to add another media player to a different form in the same application, and I can't change any of the properties on the media player. For example, clicking on the arrow to expose options for a boolean property doesn't show a true/false list, and trying to click on the Name property to rename it causes a tooltip showing the default name (which is axWindowsMediaPlayer1) to pop up, obscuring the Name property textbox.

I'm probably missing something very obvious, has anyone else encountered this?

Thanks in advance.


Sequencer Source MF_TOPOLOGY_PROJECTSTART not being honoured


I have two audio files:

093500.wma
MF_TOPOLOGY_PROJECTSTART: 102310000

095100.wma
MF_TOPOLOGY_PROJECTSTART: 9701530000

Filenames are HHMMSS in UTC.

If I seek in my UI timeline to 10:34:50 and start playback, there is a ten-second pause before the first segment starts, as expected. However, the second segment starts immediately after the end of the first, when it should actually start 00:16:10 after the start of playback.

Session GLOBAL_TIME is set.

SequencerTopologyFlags_Last is set on the second segment, and not on the first.

Any ideas what's incorrect?
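
For context, here is a sketch of how the two segments are appended (topology creation elided; times in 100-ns units as above; the helper name is my own):

// Append the two segment topologies to the sequencer source.
HRESULT AppendSegments(IMFSequencerSource *pSequencer,
                       IMFTopology *pFirst, IMFTopology *pSecond)
{
    MFSequencerElementId idFirst = 0, idSecond = 0;

    // 093500.wma: starts 10.231 s into the project.
    HRESULT hr = pFirst->SetUINT64(MF_TOPOLOGY_PROJECTSTART, 102310000);
    if (SUCCEEDED(hr))
        hr = pSequencer->AppendTopology(pFirst, 0, &idFirst);

    // 095100.wma: starts 970.153 s into the project; marked as the last segment.
    if (SUCCEEDED(hr))
        hr = pSecond->SetUINT64(MF_TOPOLOGY_PROJECTSTART, 9701530000);
    if (SUCCEEDED(hr))
        hr = pSequencer->AppendTopology(pSecond, SequencerTopologyFlags_Last, &idSecond);

    return hr;
}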




Rate control in custom source


I am implementing a custom source to admit H.264 files directly into the Media Foundation pipeline. The custom source works OK when embedded in the MFPlayback2 sample.

A step forward is to implement rate control and support for it. To do that, the custom source implements the interfaces IMFGetService, IMFRateSupport, and IMFRateControl. The QueryInterface method of the custom media source returns the IMFGetService interface. For getting the IMFRateControl interface, I use the method GetService, as follows:

IFACEMETHODIMP NVRSource::GetService(REFGUID guidService, REFIID riid, LPVOID *ppvObject)
{
    HRESULT hr = MF_E_UNSUPPORTED_SERVICE;
    *ppvObject = NULL;

    if (guidService == MF_RATE_CONTROL_SERVICE)
    {
        if (riid == IID_IMFRateControl)
        {
            *ppvObject = static_cast<IMFRateControl*>(this);
            AddRef();   // GetService must return a counted reference
            hr = S_OK;
        }
        else if (riid == IID_IMFRateSupport)
        {
            *ppvObject = static_cast<IMFRateSupport*>(this);
            AddRef();
            hr = S_OK;
        }
    }
    return hr;
}

The first time the application enters this method to get an IMFRateSupport interface, it crashes because of a stack problem. If I instead hand out the IMFRateSupport interface from QueryInterface, the application doesn't crash, but no interface is returned.

Any idea of what is happening? Thanks a lot in advance.

Generate managed wrapper for Media Foundation

The MF team has provided MFManagedEncode, which is written in C#. Inside the source they have created wrappers for a lot of MF interfaces, but not all of them. Is there any tool that can generate the managed wrappers for the remaining interfaces automatically? If not, I need to write the wrappers myself, but where can I find the GUID of the COM class and the IID of the interface? MSDN just mentions which library an interface belongs to. For example, what is the IID of the IMFMediaStream interface?

Topoedit error: Topoedit.exe is not a valid win32 application


Hi,

I have recently downloaded and installed the Windows SDK to start some Windows desktop media application development. If it helps give context for an answer, I only have Windows 7 so I downloaded that version of the software, not the latest Windows 8 version.

To get things started I wanted to experiment with building some topologies using Topoedit. However, when I tried to run the application, the following error message immediately appeared: "Topoedit.exe is not a valid win32 application". Consequently I could not use it. None of the topoedit versions (for the different architectures) that came with the SDK worked. I did, however, manage to use graphedit.exe OK, but I would still like to validate my topology in topoedit.

So my question is: How can I fix my error so that I can use topoedit? Also, is it strictly necessary to use topoedit if I can still get graphedit to work, will it have an impact on my application design / debugging capabilities?

Thanks in advance.


How to get DTS of a frame

$
0
0

Hi,

I'm using the H.264 MFT to do encoding and mux the encoded frames myself into an MP4 file.

I can get the timestamp (presentation time) using IMFSample->GetSampleTime(&iSampleTime), but to mux frames correctly I need to know the DTS of each frame (it can differ from the PTS).

Could anyone tell me how I can get the DTS from an IMFSample?

Thanks
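
A hedged sketch of one approach: on Windows 8 and later, the encoder can attach the decode timestamp to output samples as the MFSampleExtension_DecodeTimestamp attribute (100-ns units). When the attribute is absent (for example, when no B-frames are produced), the DTS equals the PTS. The helper name is my own.

#include <mfapi.h>   // MFSampleExtension_DecodeTimestamp (Windows 8+)

// Returns the DTS if the encoder attached one, otherwise falls back to the PTS.
HRESULT GetDecodeTime(IMFSample *pSample, LONGLONG *pllDts)
{
    UINT64 dts = 0;
    HRESULT hr = pSample->GetUINT64(MFSampleExtension_DecodeTimestamp, &dts);
    if (SUCCEEDED(hr))
    {
        *pllDts = (LONGLONG)dts;
        return S_OK;
    }
    // Attribute not present: assume PTS == DTS.
    return pSample->GetSampleTime(pllDts);
}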

The sample link for MFDub no longer exists in the Media Foundation Team Blog


I'm looking for the MFDub sample mentioned in the Media Foundation Team Blog; however, the link doesn't work. Could someone please point me to the current source code location where this and the other samples mentioned in the blog have gone?

Mark

Adding a SampleGrabber to the MF_BasicPlayback example: no errors, but the SampleGrabber does not call back?


Hi,

I have modified the MF_BasicPlayback example from the SDK to include the code from the "Using the Sample Grabber Sink" example.

I have inserted a tee into the video stream, so the source node feeds a tee which is connected to the renderer output node and the sample grabber node. The code runs and plays a video, but the sample grabber callback never gets called.

I am struggling to find where the error is without the joys of GraphEdit to check whether everything has connected up correctly.

Any ideas?

I have modified AddBranchToPartialTopology with the code below; I use the provided SampleGrabberCB class from the Microsoft example.

Thanks for any help.

Mike

if (majorType == MFMediaType_Video && fSelected)
{
    // Configure the media type that the Sample Grabber will receive.
    // Setting the major and subtype is usually enough for the topology loader
    // to resolve the topology.
    CHECK_HR(hr = MFCreateMediaType(&pType));
    CHECK_HR(hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
    CHECK_HR(hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_ARGB32));

    // Create the sample grabber sink.
    CHECK_HR(hr = SampleGrabberCB::CreateInstance(&m_pSampleGrabber));
    CHECK_HR(hr = MFCreateSampleGrabberSinkActivate(pType, m_pSampleGrabber, &pSinkActivate));

    // To run as fast as possible, set this attribute (requires Windows 7):
    //CHECK_HR(hr = pSinkActivate->SetUINT32(MF_SAMPLEGRABBERSINK_IGNORE_CLOCK, TRUE));

    DWORD streamID;
    pSourceSD->GetStreamIdentifier(&streamID);
    CHECK_HR(hr = AddOutputNode(pTopology, pSinkActivate, 0, &pSampleGrabberNode)); // if I put in streamID it throws an error.

    CHECK_HR(hr = MFCreateTopologyNode(MF_TOPOLOGY_TEE_NODE, &pTeeNode));
    CHECK_HR(hr = pTopology->AddNode(pTeeNode));
    CHECK_HR(hr = pSourceNode->ConnectOutput(0, pTeeNode, 0));
    // Note: each downstream node needs its own tee output index; connecting
    // both to output 0 would replace the first connection with the second.
    CHECK_HR(hr = pTeeNode->ConnectOutput(0, pSampleGrabberNode, 0));
    CHECK_HR(hr = pTeeNode->ConnectOutput(1, pOutputNode, 0));
}
else
{
    // Connect the source node to the output node.
    hr = pSourceNode->ConnectOutput(0, pOutputNode, 0);
}


Test camera works with amcap (DirectShow) but not MFCaptureD3D (Media Foundation): 0x80070491 on the IMFSourceReader::ReadSample callback


I have a test camera that works with amcap (DirectShow) but does not work with MFCaptureD3D (Media Foundation).

The error I get is 0x80070491, "There was no match for the specified key in the index", in the IMFSourceReader::ReadSample callback function, OnReadSample.

What might the camera be missing in order to work correctly with Media Foundation?

If I want to give feedback to the camera module maker on what they need to do to work with Media Foundation, where can I direct them?

Media Foundation functionality vs. DirectShow - video crossfading, playback rates

I'm researching the possibility of upgrading some software from DirectShow to Media Foundation, enticed by the possibilities of H.264 support and improved performance via hardware acceleration. I'm also worried that Microsoft has deprecated DirectShow Editing Services, which is an integral part of the current implementation, and I don't want to get caught in a situation where a new version of Windows gets released that breaks the software.

However, I'm having trouble determining whether or not all the features in our DirectShow video implementation are actually attainable in Media Foundation. I obviously don't want a situation in which a video "upgrade" actually results in reduced functionality. Though video's not a major component of this software, it has some basic video editing capabilities that I'd need to be able to replicate in a Media Foundation conversion. Here are the base requirements:

1) play multiple video streams in a single session
2) crossfade between overlapping video streams
3) not stop playback when there's a gap between video streams or a gap between the last video stream and the actual end of the timeline (related to #7 below)
4) set playback rate of each video stream independently 
5) ability to crop beginning and end of each video stream independently (i.e., have one video start two seconds in instead of at the beginning, chop 3 seconds off the end of another video, etc.)
6) playback using a master clock provided by our software rather than by MF
7) receive each frame before it gets displayed so that I can add effects, still images, and text (currently done via a transform filter in DS)
8) rendering the above to a video file
9) ability to work in both 32-bit and 64-bit apps

(2) and (4) appear to be where the difficulties arise in Media Foundation. I've read forum discussions at https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/b7323e9b-0adf-457c-8345-7c3cb8b190c8/howto-mix-two-videoaudio-streams?forum=mediafoundationdevelopment and https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/9a4390f7-d99e-44bd-9106-dd58344f850c/how-to-mix-two-streams-in-evr?forum=mediafoundationdevelopment that seem to indicate that these tasks are extremely complex, if possible at all, in Media Foundation. 

These posts are several years old, though, and I'm not sure whether or not this is the current state of the art in Media Foundation. Are there MF experts or insiders out there who can weigh in on whether or not the above functionality can realistically be achieved with Media Foundation, and whether or not the techniques described in these posts are still the best available methods? If matters are still the same, can we expect support for this functionality in future versions of MF, and if so, when? If MF isn't a viable solution, what tools are people using to actually implement high-performance video editing on Windows? 

Thanks for your help.

